[jira] [Commented] (YARN-7863) Modify placement constraints to support node attributes

2018-01-31 Thread Arun Suresh (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7863?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16346391#comment-16346391
 ] 

Arun Suresh commented on YARN-7863:
---

[~sunilg], there are actually two aspects to this:
# Supporting node attributes as a Target Expression: e.g., place containers 
with allocation tags on nodes with attribute java_version = 1.8.
# Supporting a node attribute as a Scope: e.g., place 5 containers, no more 
than one per failure domain, assuming all nodes in the cluster have a node 
attribute called "failure_domain" with values 1 - 10.

1) can be tackled here, I guess, but 2) needs some modifications to the 
AllocationTagsManager and should be handled in YARN-7858.

Thoughts, [~kkaranasos] / [~leftnoteasy]? 
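A rough illustration of the two styles above, as a minimal sketch only: targetIn, 
cardinality, and PlacementTargets.allocationTag exist in the current 
PlacementConstraints API, while the node-attribute target and the attribute-keyed 
scope (shown only in comments) are hypothetical until this JIRA and YARN-7858 land.
{code:java}
import static org.apache.hadoop.yarn.api.resource.PlacementConstraints.NODE;
import static org.apache.hadoop.yarn.api.resource.PlacementConstraints.cardinality;
import static org.apache.hadoop.yarn.api.resource.PlacementConstraints.targetIn;
import static org.apache.hadoop.yarn.api.resource.PlacementConstraints.PlacementTargets.allocationTag;

import org.apache.hadoop.yarn.api.resource.PlacementConstraint;

public class ConstraintSketch {
  // Today: affinity to nodes that already hold a container tagged "foo".
  static PlacementConstraint affinityToFoo() {
    return targetIn(NODE, allocationTag("foo")).build();
  }

  // Today: no more than one "foo" container per node (NODE scope).
  static PlacementConstraint onePerNode() {
    return cardinality(NODE, 0, 1, "foo").build();
  }

  // 1) Proposed here (hypothetical, does not compile today): a node-attribute
  //    target, e.g. targetIn(NODE, nodeAttribute("java_version", "1.8")).
  // 2) Proposed for YARN-7858 (hypothetical): an attribute-keyed scope,
  //    e.g. cardinality("failure_domain", 0, 1, "foo") (one "foo" per domain).
}
{code}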

> Modify placement constraints to support node attributes
> ---
>
> Key: YARN-7863
> URL: https://issues.apache.org/jira/browse/YARN-7863
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Sunil G
>Assignee: Sunil G
>Priority: Major
>
> This Jira will track the work to *modify existing placement constraints to 
> support node attributes.*



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (YARN-7863) Modify placement constraints to support node attributes

2018-01-31 Thread Arun Suresh (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7863?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16346391#comment-16346391
 ] 

Arun Suresh edited comment on YARN-7863 at 1/31/18 7:59 AM:


[~sunilg], there are actually two aspects to this:
# Supporting node attributes as a Target Expression: e.g., place containers 
with allocation tag "foo" on nodes with attribute java_version = 1.8.
# Supporting node attributes as a Scope: e.g., place 5 containers, no more 
than one per failure domain, assuming all nodes in the cluster have a node 
attribute called "failure_domain" with values 1 - 10.

1) can be tackled here, I guess, but 2) needs some modifications to the 
AllocationTagsManager and should be handled as part of YARN-7858.

Thoughts, [~kkaranasos] / [~leftnoteasy]? 


was (Author: asuresh):
[~sunilg], There are actually two aspects to this:
# Supporting node attributes as a Target Expression: For eg: place containers 
with allocation tag "foo" on nodes with attribute java_version = 1.8.
# Supporting node attributes as a Scope: For eg: place 5 containers, no more 
than one per failure domain. Assuming all nodes in cluster has a node attribute 
called "failure_domain" with values 1 - 10.

1) can be tackled here I guess. but 2) needs some modifications to the 
AllocationTagsManager and should be handled YARN-7858.

Thoughts [~kkaranasos] / [~leftnoteasy] ? 

> Modify placement constraints to support node attributes
> ---
>
> Key: YARN-7863
> URL: https://issues.apache.org/jira/browse/YARN-7863
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Sunil G
>Assignee: Sunil G
>Priority: Major
>
> This Jira will track the work to *modify existing placement constraints to 
> support node attributes.*



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (YARN-7863) Modify placement constraints to support node attributes

2018-01-31 Thread Arun Suresh (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7863?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16346391#comment-16346391
 ] 

Arun Suresh edited comment on YARN-7863 at 1/31/18 7:59 AM:


[~sunilg], there are actually two aspects to this:
# Supporting node attributes as a Target Expression: e.g., place containers 
with allocation tag "foo" on nodes with attribute java_version = 1.8.
# Supporting node attributes as a Scope: e.g., place 5 containers, no more 
than one per failure domain, assuming all nodes in the cluster have a node 
attribute called "failure_domain" with values 1 - 10.

1) can be tackled here, I guess, but 2) needs some modifications to the 
AllocationTagsManager and should be handled in YARN-7858.

Thoughts, [~kkaranasos] / [~leftnoteasy]? 


was (Author: asuresh):
[~sunilg], There are actually two aspects to this:
# Supporting node attributes as a Target Expression: For eg: place containers 
with allocation tags on nodes with attribute java_version = 1.8.
# Supporting node attributes as a Scope: For eg: place 5 containers, no more 
than one per failure domain. Assuming all nodes in cluster has a node attribute 
called "failure_domain" with values 1 - 10.

1) can be tackled here I guess. but 2) needs some modifications to the 
AllocationTagsManager and should be handled YARN-7858.

Thoughts [~kkaranasos] / [~leftnoteasy] ? 

> Modify placement constraints to support node attributes
> ---
>
> Key: YARN-7863
> URL: https://issues.apache.org/jira/browse/YARN-7863
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Sunil G
>Assignee: Sunil G
>Priority: Major
>
> This Jira will track the work to *modify existing placement constraints to 
> support node attributes.*



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-7859) New feature: add queue scheduling deadLine in fairScheduler.

2018-01-31 Thread genericqa (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7859?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16346394#comment-16346394
 ] 

genericqa commented on YARN-7859:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m  
0s{color} | {color:blue} Docker mode activated. {color} |
| {color:red}-1{color} | {color:red} patch {color} | {color:red}  0m  4s{color} 
| {color:red} YARN-7859 does not apply to trunk. Rebase required? Wrong Branch? 
See https://wiki.apache.org/hadoop/HowToContribute for help. {color} |
\\
\\
|| Subsystem || Report/Notes ||
| JIRA Issue | YARN-7859 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12908482/YARN-7859-v1.patch |
| Console output | 
https://builds.apache.org/job/PreCommit-YARN-Build/19542/console |
| Powered by | Apache Yetus 0.8.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> New feature: add queue scheduling deadLine in fairScheduler.
> 
>
> Key: YARN-7859
> URL: https://issues.apache.org/jira/browse/YARN-7859
> Project: Hadoop YARN
>  Issue Type: New Feature
>  Components: fairscheduler
>Affects Versions: 3.0.0-alpha2
> Environment:     The environment of my company is  
> hadoop2.6.0-cdh5.4.7
>Reporter: wangwj
>Priority: Major
>  Labels: fairscheduler, features, patch
> Fix For: 2.9.0, 3.0.0-alpha2
>
> Attachments: YARN-7859-v1.patch
>
>   Original Estimate: 24h
>  Remaining Estimate: 24h
>
>  As everyone knows, in FairScheduler the phenomenon of queue scheduling 
> starvation often occurs when the number of cluster jobs is large: the apps in 
> one or more queues stay pending. So I have thought of a way to solve this 
> problem: add a queue scheduling deadLine to FairScheduler. When a queue has 
> not been scheduled by FairScheduler within a specified time, we schedule it 
> mandatorily!
> Currently the community solves queue scheduling starvation by preempting 
> containers, but that may increase the failure rate of jobs.
> On the basis of the above, I propose this issue...



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-7863) Modify placement constraints to support node attributes

2018-01-31 Thread Wangda Tan (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7863?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16346399#comment-16346399
 ] 

Wangda Tan commented on YARN-7863:
--

[~asuresh] / [~sunilg], is there anything that needs to be done before the merge 
or the 3.1.0 release? If no API changes are needed, we can defer all related 
implementation work until YARN-3409 is finished.

> Modify placement constraints to support node attributes
> ---
>
> Key: YARN-7863
> URL: https://issues.apache.org/jira/browse/YARN-7863
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Sunil G
>Assignee: Sunil G
>Priority: Major
>
> This Jira will track the work to *modify existing placement constraints to 
> support node attributes.*



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-7494) Add muti node lookup support for better placement

2018-01-31 Thread Tao Yang (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7494?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16346404#comment-16346404
 ] 

Tao Yang commented on YARN-7494:


Thanks [~cheersyang] for the mention.

Some thoughts from my side (parts are the same as in my last comments):
 # Sorting by nodeLookupPolicy for every allocation pass is expensive. We 
have planned to add a new service that manages and periodically refreshes a 
per-ordering-policy ordered list of nodes; the scheduler can then filter 
candidate nodes from these ordered lists for an app request and needs no 
further sorting. That way we can define a cluster-level (or default) ordering 
policy to achieve better load balance or other requirements, and it is better 
for scheduler performance. A rough sketch of this idea follows this comment.
 # This patch iterates over all partition nodes to create a new 
PartitionBasedCandidateNodeSet instance for every scheduling pass in 
CapacityScheduler#getCandidateNodeSet. I think we can keep a single instance to 
avoid creating it every time. Furthermore, we can replace it with the ordered 
node list if that plan is acceptable.
 # This patch keeps iterating over all nodes and triggering the scheduling 
process for every node in CapacityScheduler#schedule. That is appropriate for 
the previous scheduler, which allocates for a single node, but for multiple 
nodes I think it is better to iterate over all partitions to trigger the 
scheduling process. We can move the multiNodePlacementEnabled check branch from 
CapacityScheduler#getCandidateNodeSet to CapacityScheduler#schedule and apply 
different iteration and logic for each choice.
 # CandidateNodeSet#getAllNodes returns a Map (keyed by NodeId), and there 
seems to be no need to look nodes up by NodeId; perhaps we can change it to a 
Set or List to support getting ordered nodes.

Thanks.
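A minimal sketch of the ordered-node-list idea in point 1 above; the class name, 
the refresh mechanism, and the method shapes are illustrative assumptions, not 
the actual patch:
{code:java}
import java.util.Collections;
import java.util.Comparator;
import java.util.List;
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.AtomicReference;
import java.util.function.Predicate;
import java.util.function.Supplier;
import java.util.stream.Collectors;

/** Illustrative only: one pre-sorted node list per ordering policy. */
class OrderedNodesManager<N> {
  private final AtomicReference<List<N>> orderedNodes =
      new AtomicReference<>(Collections.emptyList());
  private final ScheduledExecutorService refresher =
      Executors.newSingleThreadScheduledExecutor();

  OrderedNodesManager(Supplier<List<N>> clusterNodes,
      Comparator<N> orderingPolicy, long refreshMs) {
    // Re-sort in the background so the hot allocation path never sorts.
    refresher.scheduleAtFixedRate(() -> orderedNodes.set(
        clusterNodes.get().stream().sorted(orderingPolicy)
            .collect(Collectors.toList())),
        0, refreshMs, TimeUnit.MILLISECONDS);
  }

  /** Allocation path: filter the already-ordered snapshot; no sorting needed. */
  List<N> candidates(Predicate<N> appRequestFilter) {
    return orderedNodes.get().stream().filter(appRequestFilter)
        .collect(Collectors.toList());
  }
}
{code}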

> Add muti node lookup support for better placement
> -
>
> Key: YARN-7494
> URL: https://issues.apache.org/jira/browse/YARN-7494
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: capacity scheduler
>Reporter: Sunil G
>Assignee: Sunil G
>Priority: Major
> Attachments: YARN-7494.001.patch, YARN-7494.v0.patch, 
> YARN-7494.v1.patch
>
>
> Instead of a single node, for effectiveness we can consider a multi-node 
> lookup based on partition to start with.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-7859) New feature: add queue scheduling deadLine in fairScheduler.

2018-01-31 Thread wangwj (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-7859?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

wangwj updated YARN-7859:
-
Fix Version/s: (was: 2.6.0)

> New feature: add queue scheduling deadLine in fairScheduler.
> 
>
> Key: YARN-7859
> URL: https://issues.apache.org/jira/browse/YARN-7859
> Project: Hadoop YARN
>  Issue Type: New Feature
>  Components: fairscheduler
> Environment:     The environment of my company is  
> hadoop2.6.0-cdh5.4.7
>Reporter: wangwj
>Priority: Major
>  Labels: fairscheduler, features, patch
>   Original Estimate: 24h
>  Remaining Estimate: 24h
>
>  As everyone knows, in FairScheduler the phenomenon of queue scheduling 
> starvation often occurs when the number of cluster jobs is large: the apps in 
> one or more queues stay pending. So I have thought of a way to solve this 
> problem: add a queue scheduling deadLine to FairScheduler. When a queue has 
> not been scheduled by FairScheduler within a specified time, we schedule it 
> mandatorily!
> Currently the community solves queue scheduling starvation by preempting 
> containers, but that may increase the failure rate of jobs.
> On the basis of the above, I propose this issue...



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-7859) New feature: add queue scheduling deadLine in fairScheduler.

2018-01-31 Thread wangwj (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-7859?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

wangwj updated YARN-7859:
-
Attachment: (was: YARN-7859-v1.patch)

> New feature: add queue scheduling deadLine in fairScheduler.
> 
>
> Key: YARN-7859
> URL: https://issues.apache.org/jira/browse/YARN-7859
> Project: Hadoop YARN
>  Issue Type: New Feature
>  Components: fairscheduler
> Environment:     The environment of my company is  
> hadoop2.6.0-cdh5.4.7
>Reporter: wangwj
>Priority: Major
>  Labels: fairscheduler, features, patch
>   Original Estimate: 24h
>  Remaining Estimate: 24h
>
>  As everyone knows, in FairScheduler the phenomenon of queue scheduling 
> starvation often occurs when the number of cluster jobs is large: the apps in 
> one or more queues stay pending. So I have thought of a way to solve this 
> problem: add a queue scheduling deadLine to FairScheduler. When a queue has 
> not been scheduled by FairScheduler within a specified time, we schedule it 
> mandatorily!
> Currently the community solves queue scheduling starvation by preempting 
> containers, but that may increase the failure rate of jobs.
> On the basis of the above, I propose this issue...



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-7859) New feature: add queue scheduling deadLine in fairScheduler.

2018-01-31 Thread wangwj (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-7859?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

wangwj updated YARN-7859:
-
Affects Version/s: (was: 2.6.0)

> New feature: add queue scheduling deadLine in fairScheduler.
> 
>
> Key: YARN-7859
> URL: https://issues.apache.org/jira/browse/YARN-7859
> Project: Hadoop YARN
>  Issue Type: New Feature
>  Components: fairscheduler
> Environment:     The environment of my company is  
> hadoop2.6.0-cdh5.4.7
>Reporter: wangwj
>Priority: Major
>  Labels: fairscheduler, features, patch
>   Original Estimate: 24h
>  Remaining Estimate: 24h
>
>  As everyone knows, in FairScheduler the phenomenon of queue scheduling 
> starvation often occurs when the number of cluster jobs is large: the apps in 
> one or more queues stay pending. So I have thought of a way to solve this 
> problem: add a queue scheduling deadLine to FairScheduler. When a queue has 
> not been scheduled by FairScheduler within a specified time, we schedule it 
> mandatorily!
> Currently the community solves queue scheduling starvation by preempting 
> containers, but that may increase the failure rate of jobs.
> On the basis of the above, I propose this issue...



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Resolved] (YARN-7802) [UI2] Application regex search did not work properly with app name

2018-01-31 Thread Sunil G (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-7802?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sunil G resolved YARN-7802.
---
  Resolution: Fixed
Hadoop Flags: Reviewed

Thanks [~Sreenath]

> [UI2] Application regex search did not work properly with app name
> --
>
> Key: YARN-7802
> URL: https://issues.apache.org/jira/browse/YARN-7802
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: yarn-ui-v2
>Reporter: Yesha Vora
>Assignee: Sreenath Somarajapuram
>Priority: Major
> Attachments: YARN-7802.1.patch, YARN-7802.2.patch
>
>
> Steps:
> 1) Start yarn services with "yesha-hbase-retry-2"
> 2) put regex = yesha-hbase-retry-2
> http://host:8088/ui2/#/yarn-apps/apps?searchText=yesha-hbase-retry-2
> Here, the application does not get listed. The regex works with the 
> "yesha-hbase-retry-" input but does not work with the full app name.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-7703) Apps killed from the NEW state are not recorded in the state store

2018-01-31 Thread lujie (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7703?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16346788#comment-16346788
 ] 

lujie commented on YARN-7703:
-

After reading the discussion in 
[YARN-1618|https://issues.apache.org/jira/browse/YARN-1618], I think the current 
patch is not correct, so I am cancelling the patch.

> Apps killed from the NEW state are not recorded in the state store
> --
>
> Key: YARN-7703
> URL: https://issues.apache.org/jira/browse/YARN-7703
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: resourcemanager
>Reporter: Jason Lowe
>Assignee: lujie
>Priority: Major
> Attachments: YARN-7703_0.patch
>
>
> While reviewing YARN-7663 I noticed that apps killed from the NEW state skip 
> storing anything to the RM state store.  That means upon restart and recovery 
> these apps will not be recovered, so they will simply disappear.  That could 
> be surprising for users.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-7757) Refactor NodeLabelsProvider to be more generic and reusable for node attributes providers

2018-01-31 Thread Sunil G (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7757?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16346722#comment-16346722
 ] 

Sunil G commented on YARN-7757:
---

Thanks [~cheersyang]. Makes sense.

+1 to the patch. I will commit later today if no objections.

> Refactor NodeLabelsProvider to be more generic and reusable for node 
> attributes providers
> -
>
> Key: YARN-7757
> URL: https://issues.apache.org/jira/browse/YARN-7757
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: nodemanager
>Reporter: Weiwei Yang
>Assignee: Weiwei Yang
>Priority: Blocker
> Attachments: YARN-7757-YARN-3409.001.patch, 
> YARN-7757-YARN-3409.002.patch, YARN-7757-YARN-3409.003.patch, 
> YARN-7757-YARN-3409.004.patch, YARN-7757-YARN-3409.005.patch, 
> nodeLabelsProvider_refactor_class_hierarchy.pdf
>
>
> Propose to refactor {{NodeLabelsProvider}} and 
> {{AbstractNodeLabelsProvider}} to be more generic, so node attribute 
> providers can reuse these interfaces/abstract classes.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Resolved] (YARN-7837) TestRMWebServiceAppsNodelabel.testAppsRunning is failing

2018-01-31 Thread Haibo Chen (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-7837?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Haibo Chen resolved YARN-7837.
--
Resolution: Duplicate

> TestRMWebServiceAppsNodelabel.testAppsRunning is failing
> 
>
> Key: YARN-7837
> URL: https://issues.apache.org/jira/browse/YARN-7837
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: resourcemanager
>Reporter: Haibo Chen
>Priority: Major
>
> org.junit.ComparisonFailure: partition amused 
>  Expected :\{"memory":1024,"vCores":1}
>  Actual   
> :{"memory":1024,"vCores":1,"resourceInformations":{"resourceInformation":[
> {"maximumAllocation":9223372036854775807,"minimumAllocation":0,"name":"memory-mb","resourceType":"COUNTABLE","units":"Mi","value":1024}
> ,\{"maximumAllocation":9223372036854775807,"minimumAllocation":0,"name":"vcores","resourceType":"COUNTABLE","units":"","value":1}]}}
>     
>  
>  
>     at org.junit.Assert.assertEquals(Assert.java:115)
>      at 
> org.apache.hadoop.yarn.server.resourcemanager.webapp.TestRMWebServiceAppsNodelabel.verifyResource(TestRMWebServiceAppsNodelabel.java:218)
>      at 
> org.apache.hadoop.yarn.server.resourcemanager.webapp.TestRMWebServiceAppsNodelabel.testAppsRunning(TestRMWebServiceAppsNodelabel.java:201)
>      at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>      at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>      at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>      at java.lang.reflect.Method.invoke(Method.java:497)
>      at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:47)
>      at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
>      at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:44)
>      at 
> org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
>      at 
> org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
>      at 
> org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
>      at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:271)
>      at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:70)
>      at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:50)
>      at org.junit.runners.ParentRunner$3.run(ParentRunner.java:238)
>      at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:63)
>      at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:236)
>      at org.junit.runners.ParentRunner.access$000(ParentRunner.java:53)
>      at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:229)
>      at org.junit.runners.ParentRunner.run(ParentRunner.java:309)
>      at org.junit.runner.JUnitCore.run(JUnitCore.java:160)
>      at 
> com.intellij.junit4.JUnit4IdeaTestRunner.startRunnerWithArgs(JUnit4IdeaTestRunner.java:68)
>      at 
> com.intellij.rt.execution.junit.IdeaTestRunner$Repeater.startRunnerWithArgs(IdeaTestRunner.java:47)
>      at 
> com.intellij.rt.execution.junit.JUnitStarter.prepareStreamsAndStart(JUnitStarter.java:242)
>      at 
> com.intellij.rt.execution.junit.JUnitStarter.main(JUnitStarter.java:70)



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-7857) -fstack-check compilation flag causes binary incompatibility for container-executor between RHEL 6 and RHEL 7

2018-01-31 Thread Miklos Szegedi (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7857?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16346985#comment-16346985
 ] 

Miklos Szegedi commented on YARN-7857:
--

Thank you, [~Jim_Brennan], for raising this. Indeed, you are right that it is 
not a simple stack overflow that causes YARN-7796. However, I looked into it, 
and that might still be the right fix.

The article you mentioned above is a bit coarse and does not tell much 
about the details. I reproduced your issue with the exact RHEL versions you 
mentioned. At the time of the crash we have the following values:
{code:java}
RBX=RSP=0x7fffe320=BOTTOM-0x1CE0
SIZE=128K=0x20000
RDX=(SIZE + 2*15)/16*16=0x20010
RAX=RSP-(4K-8)-n*4K=0x7ffdd328=RSP-0x20FF8 << crashes writing 0 here
RCX=RSP-((SIZE + 2*15)/16*16+3K-8)=0x7ffdb318=RSP-0x23008
BUFFER=(RSP - SIZE + 15)/16*16=0x7FFDE310
{code}
The stack check code writes a 0 to every page from RSP-(4K-8) down to RCX 
(RSP-0x23008 at the time of the crash), using RAX as the iterator. The 
eventual location of the buffer is a bit above the crash site, but not by much.
 However, RSP is just 2 pages above the bottom of the stack and we try to check 
just a few pages below the eventual buffer location, so the write should 
succeed. In fact, when I try to reproduce the same issue (an rh68-built binary 
on rh74) with a 110K buffer instead of 128K, it works.
 As a conclusion, the stack check code seems to be legitimate. However, the 
code might address the same memory later, ending up with the same crash even 
without stack checking. The RHEL 7.4 code instead does an OR of each location 
with 0 in place. Since the stack check code is similar to what Meltdown does, I 
am wondering whether we ran into some kernel protection. Moving the buffer to 
the heap removes all risk of running into this protection.

> -fstack-check compilation flag causes binary incompatibility for 
> container-executor between RHEL 6 and RHEL 7
> -
>
> Key: YARN-7857
> URL: https://issues.apache.org/jira/browse/YARN-7857
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: nodemanager
>Affects Versions: 3.0.0
>Reporter: Jim Brennan
>Assignee: Jim Brennan
>Priority: Major
>
> The segmentation fault in container-executor reported in [YARN-7796]  appears 
> to be due to a binary compatibility issue with the {{-fstack-check}} flag 
> that was added in [YARN-6721]
> Based on my testing, a container-executor (without the patch from 
> [YARN-7796]) compiled on RHEL 6 with the -fstack-check flag always hits this 
> segmentation fault when run on RHEL 7.  But if you compile without this flag, 
> the container-executor runs on RHEL 7 with no problems.  I also verified this 
> with a simple program that just does the copy_file.
> I think we need to either remove this flag, or find a suitable alternative.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-7864) YARN Federation document has error. spelling mistakes.

2018-01-31 Thread genericqa (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7864?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16346660#comment-16346660
 ] 

genericqa commented on YARN-7864:
-

| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
26s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 14m 
59s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
14s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
23m 44s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
12s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green}  
9m 46s{color} | {color:green} patch has no errors when building and testing our 
client artifacts. {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
20s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 34m 50s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:5b98639 |
| JIRA Issue | YARN-7864 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12908555/YARN-7864.001.patch |
| Optional Tests |  asflicense  mvnsite  |
| uname | Linux 90c139a05d18 4.4.0-64-generic #85-Ubuntu SMP Mon Feb 20 
11:50:30 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / 5206b2c |
| maven | version: Apache Maven 3.3.9 |
| Max. process+thread count | 407 (vs. ulimit of 5000) |
| modules | C: hadoop-yarn-project/hadoop-yarn/hadoop-yarn-site U: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-site |
| Console output | 
https://builds.apache.org/job/PreCommit-YARN-Build/19546/console |
| Powered by | Apache Yetus 0.8.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> YARN Federation document has error. spelling mistakes.
> --
>
> Key: YARN-7864
> URL: https://issues.apache.org/jira/browse/YARN-7864
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: docs
>Affects Versions: 2.9.0, 3.0.0, 2.9.1
> Environment: 3.0.0
>Reporter: Yiran Wu
>Priority: Major
> Attachments: YARN-7864.001.patch, image-2018-01-31-19-01-12-739.png
>
>
> YARN Federation document has error. spelling mistakes.
> yarn.resourcemanger.scheduler.address -> 
> yarn.resourcemanager.scheduler.address
>  
> !image-2018-01-31-19-01-12-739.png!



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Created] (YARN-7865) Node attributes documentation

2018-01-31 Thread Weiwei Yang (JIRA)
Weiwei Yang created YARN-7865:
-

 Summary: Node attributes documentation
 Key: YARN-7865
 URL: https://issues.apache.org/jira/browse/YARN-7865
 Project: Hadoop YARN
  Issue Type: Sub-task
  Components: documentation
Reporter: Weiwei Yang


We need proper docs to introduce how to enable node-attributes, how to 
configure providers, how to specify script paths and arguments in 
configuration, what the proper permissions of the script should be, and who 
will run the script. Also, it would be good to add more info to the 
descriptions of the configuration properties.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-7757) Refactor NodeLabelsProvider to be more generic and reusable for node attributes providers

2018-01-31 Thread Sunil G (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7757?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16346683#comment-16346683
 ] 

Sunil G commented on YARN-7757:
---

[~cheersyang], thanks for the updated patch.

One quick doubt related to NodeScriptRunner:
{code:java}
// Runs the configured node script as the NodeManager user.
this.exec = new Shell.ShellCommandExecutor(
    execScript.toArray(new String[execScript.size()]), null, null,
    scriptTimeout);{code}
Does this need to be run under some privileged mode or something similar? As 
per the current impl, this script will be run as the NodeManager user. I am not 
sure whether there is any use case for running it as a privileged user to get 
some input (such as some system info).

Could we give a bit more detailed comment in the script config section (such as 
yarn-default.xml etc.) to indicate the format of the script output? That would 
make the configuration easier to get right. (See also the sketch below.)
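For reference, a sketch of the kind of output contract such a config comment 
could describe; the "NODE_ATTRIBUTE:" line prefix and the name,type,value layout 
are assumptions for illustration, not a format the patch defines:
{code:java}
import java.util.HashMap;
import java.util.Map;

/** Illustrative only: parse script stdout lines such as
 *  "NODE_ATTRIBUTE:java_version,STRING,1.8" into name -> value. */
class ScriptOutputSketch {
  static Map<String, String> parse(String stdout) {
    Map<String, String> attributes = new HashMap<>();
    for (String line : stdout.split("\n")) {
      if (!line.startsWith("NODE_ATTRIBUTE:")) {
        continue; // ignore anything else the script prints
      }
      String[] parts = line.substring("NODE_ATTRIBUTE:".length()).split(",");
      if (parts.length == 3) {
        attributes.put(parts[0].trim(), parts[2].trim());
      }
    }
    return attributes;
  }
}
{code}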

> Refactor NodeLabelsProvider to be more generic and reusable for node 
> attributes providers
> -
>
> Key: YARN-7757
> URL: https://issues.apache.org/jira/browse/YARN-7757
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: nodemanager
>Reporter: Weiwei Yang
>Assignee: Weiwei Yang
>Priority: Blocker
> Attachments: YARN-7757-YARN-3409.001.patch, 
> YARN-7757-YARN-3409.002.patch, YARN-7757-YARN-3409.003.patch, 
> YARN-7757-YARN-3409.004.patch, YARN-7757-YARN-3409.005.patch, 
> nodeLabelsProvider_refactor_class_hierarchy.pdf
>
>
> Propose to refactor {{NodeLabelsProvider}} and 
> {{AbstractNodeLabelsProvider}} to be more generic, so node attribute 
> providers can reuse these interfaces/abstract classes.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-7802) [UI2] Application regex search did not work properly with app name

2018-01-31 Thread Sunil G (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-7802?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sunil G updated YARN-7802:
--
Summary: [UI2] Application regex search did not work properly with app name 
 (was: Application regex search did not work properly with app name)

> [UI2] Application regex search did not work properly with app name
> --
>
> Key: YARN-7802
> URL: https://issues.apache.org/jira/browse/YARN-7802
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: yarn-ui-v2
>Reporter: Yesha Vora
>Assignee: Sreenath Somarajapuram
>Priority: Major
> Attachments: YARN-7802.1.patch, YARN-7802.2.patch
>
>
> Steps:
> 1) Start yarn services with "yesha-hbase-retry-2"
> 2) put regex = yesha-hbase-retry-2
> http://host:8088/ui2/#/yarn-apps/apps?searchText=yesha-hbase-retry-2
> Here, the application does not get listed. The regex works with the 
> "yesha-hbase-retry-" input but does not work with the full app name.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-7802) Application regex search did not work properly with app name

2018-01-31 Thread Sunil G (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7802?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16346687#comment-16346687
 ] 

Sunil G commented on YARN-7802:
---

+1 Committing shortly.

> Application regex search did not work properly with app name
> 
>
> Key: YARN-7802
> URL: https://issues.apache.org/jira/browse/YARN-7802
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: yarn-ui-v2
>Reporter: Yesha Vora
>Assignee: Sreenath Somarajapuram
>Priority: Major
> Attachments: YARN-7802.1.patch, YARN-7802.2.patch
>
>
> Steps:
> 1) Start yarn services with "yesha-hbase-retry-2"
> 2) put regex = yesha-hbase-retry-2
> http://host:8088/ui2/#/yarn-apps/apps?searchText=yesha-hbase-retry-2
> Here, the application does not get listed. The regex works with the 
> "yesha-hbase-retry-" input but does not work with the full app name.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-7859) New feature: add queue scheduling deadLine in fairScheduler.

2018-01-31 Thread wangwj (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-7859?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

wangwj updated YARN-7859:
-
Fix Version/s: 3.0.0

> New feature: add queue scheduling deadLine in fairScheduler.
> 
>
> Key: YARN-7859
> URL: https://issues.apache.org/jira/browse/YARN-7859
> Project: Hadoop YARN
>  Issue Type: New Feature
>  Components: fairscheduler
> Environment:     The environment of my company is  
> hadoop2.6.0-cdh5.4.7
>Reporter: wangwj
>Priority: Major
>  Labels: fairscheduler, features, patch
> Fix For: 3.0.0
>
> Attachments: YARN-7859-v1.patch
>
>   Original Estimate: 24h
>  Remaining Estimate: 24h
>
>  As everyone knows, in FairScheduler the phenomenon of queue scheduling 
> starvation often occurs when the number of cluster jobs is large: the apps in 
> one or more queues stay pending. So I have thought of a way to solve this 
> problem: add a queue scheduling deadLine to FairScheduler. When a queue has 
> not been scheduled by FairScheduler within a specified time, we schedule it 
> mandatorily!
> Currently the community solves queue scheduling starvation by preempting 
> containers, but that may increase the failure rate of jobs.
> On the basis of the above, I propose this issue...



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-7802) [UI2] Application regex search did not work properly with app name

2018-01-31 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7802?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16346739#comment-16346739
 ] 

Hudson commented on YARN-7802:
--

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #13586 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/13586/])
YARN-7802. [UI2] Application regex search did not work properly with app 
(sunilg: rev 64344c345d69bcdcf72a08cec326e6d1a5c25fab)
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/app/controllers/yarn-nodes/table.js
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/app/controllers/yarn-queue/apps.js
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/app/controllers/yarn-tools/yarn-conf.js
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/app/controllers/yarn-flowrun/metrics.js
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/app/templates/yarn-app/components.hbs
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/app/templates/yarn-flowrun/metrics.hbs
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/app/templates/components/timeline-view.hbs
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/yarn.lock
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/app/controllers/yarn-app/components.js
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/app/controllers/yarn-services.js
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/app/controllers/yarn-component-instances/info.js
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/app/templates/yarn-component-instances/info.hbs
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/app/components/timeline-view.js
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/app/templates/yarn-tools/yarn-conf.hbs
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/app/controllers/yarn-apps/apps.js
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/package.json
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/app/controllers/yarn-flowrun/info.js
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/app/templates/yarn-flowrun/info.hbs


> [UI2] Application regex search did not work properly with app name
> --
>
> Key: YARN-7802
> URL: https://issues.apache.org/jira/browse/YARN-7802
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: yarn-ui-v2
>Reporter: Yesha Vora
>Assignee: Sreenath Somarajapuram
>Priority: Major
> Attachments: YARN-7802.1.patch, YARN-7802.2.patch
>
>
> Steps:
> 1) Start yarn services with "yesha-hbase-retry-2"
> 2) put regex = yesha-hbase-retry-2
> http://host:8088/ui2/#/yarn-apps/apps?searchText=yesha-hbase-retry-2
> Here, the application does not get listed. The regex works with the 
> "yesha-hbase-retry-" input but does not work with the full app name.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-7859) New feature: add queue scheduling deadLine in fairScheduler.

2018-01-31 Thread wangwj (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7859?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16346762#comment-16346762
 ] 

wangwj commented on YARN-7859:
--

I did an experiment in my cluster.
There are two queues in my cluster:
 !screenshot-2.png! 
And the configuration associated with this issue is:
 !screenshot-3.png! 
I ran two jobs in each queue.
Of course, before the experiment I added some logging to the code.
After the two jobs completed, I extracted some of the logs...
From the logs we can see that a queue gets scheduled mandatorily if it was not 
scheduled within 3s. A sketch of this deadline check follows this comment.
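A minimal sketch of the deadline idea, for illustration only; the names 
(lastScheduledTime, schedulingDeadlineMs, orderQueues) are assumptions, not the 
actual YARN-7859 patch:
{code:java}
import java.util.ArrayList;
import java.util.Comparator;
import java.util.List;

/** Illustrative only: a queue that remembers when it was last scheduled. */
class DeadlineQueue {
  final String name;
  long lastScheduledTime = System.currentTimeMillis();
  DeadlineQueue(String name) { this.name = name; }
}

class DeadlineAwareOrdering {
  private final long schedulingDeadlineMs; // e.g. 3000 for the 3s experiment

  DeadlineAwareOrdering(long schedulingDeadlineMs) {
    this.schedulingDeadlineMs = schedulingDeadlineMs;
  }

  /** Queues past their deadline ("starved") are offered the node first. */
  List<DeadlineQueue> orderQueues(List<DeadlineQueue> queues, long now) {
    List<DeadlineQueue> ordered = new ArrayList<>(queues);
    ordered.sort(Comparator
        .comparing((DeadlineQueue q) ->
            now - q.lastScheduledTime < schedulingDeadlineMs) // starved first
        .thenComparingLong(q -> q.lastScheduledTime)); // then oldest first
    return ordered;
  }
}
{code}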

> New feature: add queue scheduling deadLine in fairScheduler.
> 
>
> Key: YARN-7859
> URL: https://issues.apache.org/jira/browse/YARN-7859
> Project: Hadoop YARN
>  Issue Type: New Feature
>  Components: fairscheduler
>Affects Versions: 3.0.0
>Reporter: wangwj
>Priority: Major
>  Labels: fairscheduler, features, patch
> Fix For: 3.0.0
>
> Attachments: YARN-7859-v1.patch, screenshot-1.png, screenshot-2.png, 
> screenshot-3.png
>
>   Original Estimate: 24h
>  Remaining Estimate: 24h
>
>  As everyone knows, in FairScheduler the phenomenon of queue scheduling 
> starvation often occurs when the number of cluster jobs is large: the apps in 
> one or more queues stay pending. So I have thought of a way to solve this 
> problem: add a queue scheduling deadLine to FairScheduler. When a queue has 
> not been scheduled by FairScheduler within a specified time, we schedule it 
> mandatorily!
> Currently the community solves queue scheduling starvation by preempting 
> containers, but that may increase the failure rate of jobs.
> On the basis of the above, I propose this issue...



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-7221) Add security check for privileged docker container

2018-01-31 Thread Shane Kumpf (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7221?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16346940#comment-16346940
 ] 

Shane Kumpf commented on YARN-7221:
---

Thanks [~eyang]! Could we consider adding ACLs in YARN to determine if the user 
is allowed to run privileged containers or disable the user override? I'm not a 
huge fan of relying on sudo to provide the ACLs for YARN. There was already 
some work done here around privileged container ACLs, but it needs to be 
revisited. I'm also not sure that these rules apply to all use cases, so 
allowing users/containers that need these features to "opt-in" or "opt-out" 
would give us the flexibility needed without making assumptions on how users 
will use the system, assuming it can be done in a safe way.
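One way the "ACLs in YARN" idea could look; a minimal sketch in which the 
property name yarn.nodemanager.privileged-containers.acl is hypothetical, while 
AccessControlList and UserGroupInformation are existing Hadoop utilities:
{code:java}
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.security.UserGroupInformation;
import org.apache.hadoop.security.authorize.AccessControlList;

class PrivilegedContainerAcl {
  // Hypothetical property name, for illustration only.
  static final String PRIVILEGED_ACL_KEY =
      "yarn.nodemanager.privileged-containers.acl";

  private final AccessControlList acl;

  PrivilegedContainerAcl(Configuration conf) {
    // Default " " means no users or groups; admins opt users in explicitly.
    this.acl = new AccessControlList(conf.get(PRIVILEGED_ACL_KEY, " "));
  }

  /** Deny --privileged unless the submitting user is on the opt-in ACL. */
  boolean isAllowed(UserGroupInformation submitter) {
    return acl.isUserAllowed(submitter);
  }
}
{code}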

> Add security check for privileged docker container
> --
>
> Key: YARN-7221
> URL: https://issues.apache.org/jira/browse/YARN-7221
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Eric Yang
>Assignee: Eric Yang
>Priority: Major
> Attachments: YARN-7221.001.patch, YARN-7221.002.patch
>
>
> When a docker container is running with privileges, the majority use case is to 
> have some program start as root and then drop privileges to another user, e.g. 
> httpd starting privileged to bind to port 80, then dropping privileges to the 
> www user.
> # We should add a security check for submitting users, to verify they have 
> "sudo" access to run a privileged container.
> # We should remove --user=uid:gid for privileged containers.
>  
> Docker can be launched with the --privileged=true and --user=uid:gid flags. With 
> this parameter combination, the user will not have access to become the root 
> user: all docker exec commands will drop to the uid:gid user instead of 
> granting privileges. A user can gain root privileges if the container file 
> system contains files that give the user extra power, but this type of image is 
> considered dangerous. A non-privileged user can launch a container with 
> special bits to acquire the same level of root power. Hence, we lose control of 
> which images should be run with --privileged, and who has sudo rights to use 
> privileged container images. As a result, we should check for sudo access and 
> then decide to parameterize --privileged=true OR --user=uid:gid. This will 
> avoid leading developers down the wrong path.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-7859) New feature: add queue scheduling deadLine in fairScheduler.

2018-01-31 Thread wangwj (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-7859?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

wangwj updated YARN-7859:
-
Attachment: screenshot-3.png

> New feature: add queue scheduling deadLine in fairScheduler.
> 
>
> Key: YARN-7859
> URL: https://issues.apache.org/jira/browse/YARN-7859
> Project: Hadoop YARN
>  Issue Type: New Feature
>  Components: fairscheduler
>Affects Versions: 3.0.0
>Reporter: wangwj
>Priority: Major
>  Labels: fairscheduler, features, patch
> Fix For: 3.0.0
>
> Attachments: YARN-7859-v1.patch, screenshot-1.png, screenshot-2.png, 
> screenshot-3.png
>
>   Original Estimate: 24h
>  Remaining Estimate: 24h
>
>  As everyone knows, in FairScheduler the phenomenon of queue scheduling 
> starvation often occurs when the number of cluster jobs is large: the apps in 
> one or more queues stay pending. So I have thought of a way to solve this 
> problem: add a queue scheduling deadLine to FairScheduler. When a queue has 
> not been scheduled by FairScheduler within a specified time, we schedule it 
> mandatorily!
> Currently the community solves queue scheduling starvation by preempting 
> containers, but that may increase the failure rate of jobs.
> On the basis of the above, I propose this issue...



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-4606) CapacityScheduler: applications could get starved because computation of #activeUsers considers pending apps

2018-01-31 Thread Sunil G (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-4606?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16346808#comment-16346808
 ] 

Sunil G commented on YARN-4606:
---

[~maniraj...@gmail.com]

You can look into {{SchedulerApplicationAttempt.isWaitingForAMContainer()}} to 
know whether an app is still waiting for its AM container. Hence the new method 
{{activeApplication()}} in {{AppSchedulingInfo}} is not needed.

Ideally we have {{ActiveUsersManager}}, which holds all the active users in the 
cluster (including users whose apps are pending). I think we can have 
{{activeUsersOfPendingApps}} along with {{activeUsers}}. Then for scheduling we 
can depend only on activeUsers, and when we need to know all active users in 
the cluster (for user-limit computation etc.) we can use 
activeUsers + activeUsersOfPendingApps. A small sketch of this split follows 
this comment.

 

cc/ [~leftnoteasy] [~jlowe] [~eepayne] Could you please help check this and 
share your thoughts?
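A small sketch of the two-set split; the names activeUsers and 
activeUsersOfPendingApps come from the comment above, while the class shape and 
methods are illustrative assumptions, not the actual ActiveUsersManager code:
{code:java}
import java.util.HashSet;
import java.util.Set;

/** Illustrative only: users of schedulable apps tracked separately from
 *  users whose apps are all still pending (e.g. blocked by max-am-percent). */
class ActiveUsersSketch {
  private final Set<String> activeUsers = new HashSet<>();
  private final Set<String> activeUsersOfPendingApps = new HashSet<>();

  void appActivated(String user) {
    activeUsersOfPendingApps.remove(user);
    activeUsers.add(user);
  }

  void appPending(String user) {
    if (!activeUsers.contains(user)) {
      activeUsersOfPendingApps.add(user);
    }
  }

  /** Scheduling divides resources only among users who can consume them. */
  int numActiveUsersForScheduling() {
    return activeUsers.size();
  }

  /** User-limit computation may still want every user in the cluster. */
  int numAllActiveUsers() {
    Set<String> all = new HashSet<>(activeUsers);
    all.addAll(activeUsersOfPendingApps);
    return all.size();
  }
}
{code}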

> CapacityScheduler: applications could get starved because computation of 
> #activeUsers considers pending apps 
> -
>
> Key: YARN-4606
> URL: https://issues.apache.org/jira/browse/YARN-4606
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: capacity scheduler, capacityscheduler
>Affects Versions: 2.8.0, 2.7.1
>Reporter: Karam Singh
>Assignee: Wangda Tan
>Priority: Critical
> Attachments: YARN-4606.1.poc.patch
>
>
> Currently, if all applications belonging to the same user in a LeafQueue are 
> pending (caused by max-am-percent, etc.), ActiveUsersManager still considers 
> that user an active user. This could lead to starvation of active 
> applications, for example:
> - App1 (belongs to user1)/app2 (belongs to user2) are active, app3 (belongs to 
> user3)/app4 (belongs to user4) are pending
> - ActiveUsersManager returns #active-users=4
> - However, only two users (user1/user2) are able to allocate new 
> resources, so the computed user-limit-resource could be lower than expected.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-7859) New feature: add queue scheduling deadLine in fairScheduler.

2018-01-31 Thread genericqa (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7859?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16346882#comment-16346882
 ] 

genericqa commented on YARN-7859:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
20s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 2 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 16m 
13s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
36s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
25s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
39s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
10m 25s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m  
1s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
25s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
38s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
34s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
34s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 23s{color} | {color:orange} 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager:
 The patch generated 10 new + 32 unchanged - 0 fixed = 42 total (was 32) 
{color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
35s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
10m 13s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  1m  
9s{color} | {color:red} 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager
 generated 2 new + 0 unchanged - 0 fixed = 2 total (was 0) {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
19s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 65m 32s{color} 
| {color:red} hadoop-yarn-server-resourcemanager in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
21s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}109m 35s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| FindBugs | 
module:hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager
 |
|  |  Useless assignment in return from 
org.apache.hadoop.yarn.server.resourcemanager.scheduler.fair.policies.FairSharePolicy$FairShareComparator.compare(Schedulable,
 Schedulable)  At 
FairSharePolicy.java:org.apache.hadoop.yarn.server.resourcemanager.scheduler.fair.policies.FairSharePolicy$FairShareComparator.compare(Schedulable,
 Schedulable)  At FairSharePolicy.java:[line 112] |
|  |  Useless assignment in return from 
org.apache.hadoop.yarn.server.resourcemanager.scheduler.fair.policies.FairSharePolicy$FairShareComparator.compare(Schedulable,
 Schedulable)  At 
FairSharePolicy.java:org.apache.hadoop.yarn.server.resourcemanager.scheduler.fair.policies.FairSharePolicy$FairShareComparator.compare(Schedulable,
 Schedulable)  At FairSharePolicy.java:[line 118] |
| Failed junit tests | 
hadoop.yarn.server.resourcemanager.webapp.TestRMWebServiceAppsNodelabel |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | 

[jira] [Updated] (YARN-7859) New feature: add queue scheduling deadLine in fairScheduler.

2018-01-31 Thread wangwj (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-7859?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

wangwj updated YARN-7859:
-
Attachment: YARN-7859-v1.patch

> New feature: add queue scheduling deadLine in fairScheduler.
> 
>
> Key: YARN-7859
> URL: https://issues.apache.org/jira/browse/YARN-7859
> Project: Hadoop YARN
>  Issue Type: New Feature
>  Components: fairscheduler
> Environment:     The environment of my company is  
> hadoop2.6.0-cdh5.4.7
>Reporter: wangwj
>Priority: Major
>  Labels: fairscheduler, features, patch
> Attachments: YARN-7859-v1.patch
>
>   Original Estimate: 24h
>  Remaining Estimate: 24h
>
>  As everyone knows, in FairScheduler the phenomenon of queue scheduling 
> starvation often occurs when the number of cluster jobs is large: the apps in 
> one or more queues stay pending. So I have thought of a way to solve this 
> problem: add a queue scheduling deadLine to FairScheduler. When a queue has 
> not been scheduled by FairScheduler within a specified time, we schedule it 
> mandatorily!
> Currently the community solves queue scheduling starvation by preempting 
> containers, but that may increase the failure rate of jobs.
> On the basis of the above, I propose this issue...



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-7859) New feature: add queue scheduling deadLine in fairScheduler.

2018-01-31 Thread wangwj (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-7859?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

wangwj updated YARN-7859:
-
Attachment: screenshot-2.png

> New feature: add queue scheduling deadLine in fairScheduler.
> 
>
> Key: YARN-7859
> URL: https://issues.apache.org/jira/browse/YARN-7859
> Project: Hadoop YARN
>  Issue Type: New Feature
>  Components: fairscheduler
>Affects Versions: 3.0.0
>Reporter: wangwj
>Priority: Major
>  Labels: fairscheduler, features, patch
> Fix For: 3.0.0
>
> Attachments: YARN-7859-v1.patch, screenshot-1.png, screenshot-2.png
>
>   Original Estimate: 24h
>  Remaining Estimate: 24h
>
>  As everyone knows, in FairScheduler the phenomenon of queue scheduling 
> starvation often occurs when the number of cluster jobs is large: the apps in 
> one or more queues stay pending. So I have thought of a way to solve this 
> problem: add a queue scheduling deadLine to FairScheduler. When a queue has 
> not been scheduled by FairScheduler within a specified time, we schedule it 
> mandatorily!
> Currently the community solves queue scheduling starvation by preempting 
> containers, but that may increase the failure rate of jobs.
> On the basis of the above, I propose this issue...



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-7757) Refactor NodeLabelsProvider to be more generic and reusable for node attributes providers

2018-01-31 Thread Weiwei Yang (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7757?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16346717#comment-16346717
 ] 

Weiwei Yang commented on YARN-7757:
---

Hi [~sunilg]

Thanks. Regarding your comments:
# It is fine to run the script as the NM user; that should be enough for most 
cases. Most node info should be readable by the NM user, and we'll explicitly 
document which user runs the script, so users will understand that. But thanks 
for pointing this out.
# That makes sense, but since it's a trivial change I suggest we track it as 
part of our doc task; I've noted that in YARN-7865, please check.

Thanks

> Refactor NodeLabelsProvider to be more generic and reusable for node 
> attributes providers
> -
>
> Key: YARN-7757
> URL: https://issues.apache.org/jira/browse/YARN-7757
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: nodemanager
>Reporter: Weiwei Yang
>Assignee: Weiwei Yang
>Priority: Blocker
> Attachments: YARN-7757-YARN-3409.001.patch, 
> YARN-7757-YARN-3409.002.patch, YARN-7757-YARN-3409.003.patch, 
> YARN-7757-YARN-3409.004.patch, YARN-7757-YARN-3409.005.patch, 
> nodeLabelsProvider_refactor_class_hierarchy.pdf
>
>
> Propose to do refactor on {{NodeLabelsProvider}}, 
> {{AbstractNodeLabelsProvider}} to be more generic, so node attributes 
> providers can reuse these interface/abstract classes.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-7859) New feature: add queue scheduling deadLine in fairScheduler.

2018-01-31 Thread wangwj (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-7859?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

wangwj updated YARN-7859:
-
Affects Version/s: 3.0.0

> New feature: add queue scheduling deadLine in fairScheduler.
> 
>
> Key: YARN-7859
> URL: https://issues.apache.org/jira/browse/YARN-7859
> Project: Hadoop YARN
>  Issue Type: New Feature
>  Components: fairscheduler
>Affects Versions: 3.0.0
> Environment: My company's environment is hadoop2.6.0-cdh5.4.7
>Reporter: wangwj
>Priority: Major
>  Labels: fairscheduler, features, patch
> Fix For: 3.0.0
>
> Attachments: YARN-7859-v1.patch
>
>   Original Estimate: 24h
>  Remaining Estimate: 24h
>
>  As is well known, in FairScheduler queue scheduling starvation often occurs 
> when the number of cluster jobs is large: the apps in one or more queues stay 
> pending. So I have thought of a way to solve this problem: add a queue 
> scheduling deadline to FairScheduler. When a queue has not been scheduled by 
> FairScheduler within a specified time, we schedule it forcibly!
> The community's current way of solving queue scheduling starvation is to 
> preempt containers, but that approach may increase the failure rate of jobs.
> On the basis of the above, I propose this issue...



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-7494) Add muti node lookup support for better placement

2018-01-31 Thread Weiwei Yang (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7494?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16346727#comment-16346727
 ] 

Weiwei Yang commented on YARN-7494:
---

Hi [~sunilg]

Some updates today: I took a deeper look at the patch. Besides the comments I 
mentioned earlier, there are some more problems we need to address.

1. Discussed with Wangda offline; we agree that we should make 
"multi-node-lookup" configurable per-app, per-queue and per-cluster (scheduler). 
Reason: this will be essential for a production cluster, where we need the 
capability to enable this feature step by step, e.g. first enable it for 1 app, 
then 10 apps, then 1 queue, and eventually the entire cluster. And this is a 
change to the factory class, so it won't be too much.

2. (My opinion) I am not in favor of the config name 
"yarn.capacity.scheduler.multi-node-placement-enabled"; it does not seem 
informative. If we are going to implement #1, can we configure it to be 
something like,

// scheduler
 yarn.capacity.sorting-nodes.policy.class
 ...DefaultSortingNodesPolicy (which returns a single node set)
 ...NodeUtilizationBasedSortingPolicy

// queue
 yarn.capacity.queue.<queue-name>.sorting-nodes.policy.class
 NodeUtilizationBasedSortingPolicy

// app
 ENV string

So the default is 
{{yarn.capacity.sorting-nodes.policy.class=DefaultSortingNodesPolicy}}, and 
queue and app can override this policy.

3. At the API level, I think we need a {{sorting nodes service}}, as [~Tao Yang] 
and I both mentioned, because you have to run some policy to sort nodes at some 
interval, right? AppPlacementAllocator should retrieve candidate nodes from this 
service, not directly from a policy; a policy should be just a sorting 
algorithm. A minimal sketch of what such a pluggable policy could look like 
follows this comment.

We can set up a meeting to discuss this if you are available. Thanks.
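
To make the shape of such a policy concrete, here is a minimal sketch under the 
naming proposed above; SortingNodesPolicy and DefaultSortingNodesPolicy are 
names from this proposal, not an existing CapacityScheduler API:

{code:java}
import java.util.Collection;
import java.util.List;
import java.util.stream.Collectors;

// placeholder for the scheduler's node type
interface SchedulerNode { }

/** A policy is just a sorting algorithm over a candidate node set. */
interface SortingNodesPolicy {
  List<SchedulerNode> sort(Collection<SchedulerNode> nodes);
}

/** Default policy: returns a single-node set, i.e. today's behavior. */
class DefaultSortingNodesPolicy implements SortingNodesPolicy {
  @Override
  public List<SchedulerNode> sort(Collection<SchedulerNode> nodes) {
    // keep only the first node offered, mirroring single-node lookup
    return nodes.stream().limit(1).collect(Collectors.toList());
  }
}
{code}

A utilization-based policy would implement the same interface with a different 
ordering, and the factory would pick the implementation from the cluster, 
queue, or app level config as described above.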

> Add muti node lookup support for better placement
> -
>
> Key: YARN-7494
> URL: https://issues.apache.org/jira/browse/YARN-7494
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: capacity scheduler
>Reporter: Sunil G
>Assignee: Sunil G
>Priority: Major
> Attachments: YARN-7494.001.patch, YARN-7494.v0.patch, 
> YARN-7494.v1.patch
>
>
> Instead of single node, for effectiveness we can consider a multi node lookup 
> based on partition to start with.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-7859) New feature: add queue scheduling deadLine in fairScheduler.

2018-01-31 Thread wangwj (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-7859?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

wangwj updated YARN-7859:
-
Environment: (was:     The environment of my company is  
hadoop2.6.0-cdh5.4.7)

> New feature: add queue scheduling deadLine in fairScheduler.
> 
>
> Key: YARN-7859
> URL: https://issues.apache.org/jira/browse/YARN-7859
> Project: Hadoop YARN
>  Issue Type: New Feature
>  Components: fairscheduler
>Affects Versions: 3.0.0
>Reporter: wangwj
>Priority: Major
>  Labels: fairscheduler, features, patch
> Fix For: 3.0.0
>
> Attachments: YARN-7859-v1.patch
>
>   Original Estimate: 24h
>  Remaining Estimate: 24h
>
>  As is well known, in FairScheduler queue scheduling starvation often occurs 
> when the number of cluster jobs is large: the apps in one or more queues stay 
> pending. So I have thought of a way to solve this problem: add a queue 
> scheduling deadline to FairScheduler. When a queue has not been scheduled by 
> FairScheduler within a specified time, we schedule it forcibly!
> The community's current way of solving queue scheduling starvation is to 
> preempt containers, but that approach may increase the failure rate of jobs.
> On the basis of the above, I propose this issue...



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-7861) [UI2] Logs page shows duplicated containers with ATS

2018-01-31 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7861?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16346877#comment-16346877
 ] 

Hudson commented on YARN-7861:
--

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #13587 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/13587/])
YARN-7861. [UI2] Logs page shows duplicated containers with ATS. (Sunil G via 
wangda) (wangda: rev 1453a04e92ce88b65995248c5d6a2bc934cbe65f)
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/app/routes/yarn-app/logs.js
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/app/controllers/yarn-app/logs.js


> [UI2] Logs page shows duplicated containers with ATS
> 
>
> Key: YARN-7861
> URL: https://issues.apache.org/jira/browse/YARN-7861
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: yarn-ui-v2
>Reporter: Sunil G
>Assignee: Sunil G
>Priority: Major
> Fix For: 3.1.0
>
> Attachments: YARN-7861.001.patch
>
>
> There were couple of issues:
>  # duplicated container listed from RM and ATS in log container list
>  # log page has to be cleared every time same page is accessed



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-7815) Mount the filecache as read-only in Docker containers

2018-01-31 Thread Shane Kumpf (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7815?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16346953#comment-16346953
 ] 

Shane Kumpf commented on YARN-7815:
---

Thanks for all the discussion here!
{quote}I think that leaves us with this proposal which should accomplish that 
and remove one of the mounts being made today:

1. nm-local-dir/filecache mounted read-only for access to localized public files
2. nm-local-dir/usercache/_user_/filecache mounted read-only for access to 
localized user-private files
3. nm-local-dir/usercache/_user_/appcache/_applicationId_ mounted read-write 
for access to the application work area and underlying container working 
directory
{quote}
This is in line with my findings, and I've got a patch mostly ready that 
implements this approach. However, I'm running into an issue where some jars 
need to be localized again. I'll post the patch or update the discussion once 
I've tracked down the cause of that issue.
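
For illustration, a minimal sketch of the three proposed mounts, assuming a 
hypothetical source:destination:mode mount-spec format; this is not the actual 
DockerLinuxContainerRuntime code:

{code:java}
import java.util.ArrayList;
import java.util.List;

class ProposedMounts {
  static List<String> mountsFor(String nmLocalDir, String user, String appId) {
    List<String> mounts = new ArrayList<>();
    // 1. public localized files: read-only
    mounts.add(nmLocalDir + "/filecache:" + nmLocalDir + "/filecache:ro");
    // 2. user-private localized files: read-only
    mounts.add(nmLocalDir + "/usercache/" + user + "/filecache:"
        + nmLocalDir + "/usercache/" + user + "/filecache:ro");
    // 3. application work area (incl. container working dir): read-write
    mounts.add(nmLocalDir + "/usercache/" + user + "/appcache/" + appId + ":"
        + nmLocalDir + "/usercache/" + user + "/appcache/" + appId + ":rw");
    return mounts;
  }
}
{code}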

> Mount the filecache as read-only in Docker containers
> -
>
> Key: YARN-7815
> URL: https://issues.apache.org/jira/browse/YARN-7815
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Shane Kumpf
>Assignee: Shane Kumpf
>Priority: Major
>
> Currently, when using the Docker runtime, the filecache directories are 
> mounted read-write into the Docker containers. Read-write access is not 
> necessary. We should make this more restrictive by changing that mount to 
> read-only.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-7842) PB changes to carry node-attributes in NM heartbeat

2018-01-31 Thread Weiwei Yang (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7842?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16346747#comment-16346747
 ] 

Weiwei Yang commented on YARN-7842:
---

Per discussion with [~sunilg], we'll revisit whether we need to keep 
NodeAttributesProto in RegisterNodeManagerResponseProto in YARN-7856; there is 
no need to in this task. I have just committed this to the feature branch. 
Thanks for the review, [~sunilg] and [~bibinchundatt]!

> PB changes to carry node-attributes in NM heartbeat
> ---
>
> Key: YARN-7842
> URL: https://issues.apache.org/jira/browse/YARN-7842
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Weiwei Yang
>Assignee: Weiwei Yang
>Priority: Major
> Attachments: YARN-7842-YARN-3409.001.patch, 
> YARN-7842-YARN-3409.002.patch
>
>
> PB changes to carry node-attributes in NM heartbeat. Split from a larger 
> patch for easier review.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-7859) New feature: add queue scheduling deadLine in fairScheduler.

2018-01-31 Thread wangwj (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-7859?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

wangwj updated YARN-7859:
-
Attachment: screenshot-1.png

> New feature: add queue scheduling deadLine in fairScheduler.
> 
>
> Key: YARN-7859
> URL: https://issues.apache.org/jira/browse/YARN-7859
> Project: Hadoop YARN
>  Issue Type: New Feature
>  Components: fairscheduler
>Affects Versions: 3.0.0
>Reporter: wangwj
>Priority: Major
>  Labels: fairscheduler, features, patch
> Fix For: 3.0.0
>
> Attachments: YARN-7859-v1.patch, screenshot-1.png
>
>   Original Estimate: 24h
>  Remaining Estimate: 24h
>
>  As is well known, in FairScheduler queue scheduling starvation often occurs 
> when the number of cluster jobs is large: the apps in one or more queues stay 
> pending. So I have thought of a way to solve this problem: add a queue 
> scheduling deadline to FairScheduler. When a queue has not been scheduled by 
> FairScheduler within a specified time, we schedule it forcibly!
> The community's current way of solving queue scheduling starvation is to 
> preempt containers, but that approach may increase the failure rate of jobs.
> On the basis of the above, I propose this issue...



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-7859) New feature: add queue scheduling deadLine in fairScheduler.

2018-01-31 Thread wangwj (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7859?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16346755#comment-16346755
 ] 

wangwj commented on YARN-7859:
--

In my cluster, I did an experiment.
 !screenshot-1.png! 

> New feature: add queue scheduling deadLine in fairScheduler.
> 
>
> Key: YARN-7859
> URL: https://issues.apache.org/jira/browse/YARN-7859
> Project: Hadoop YARN
>  Issue Type: New Feature
>  Components: fairscheduler
>Affects Versions: 3.0.0
>Reporter: wangwj
>Priority: Major
>  Labels: fairscheduler, features, patch
> Fix For: 3.0.0
>
> Attachments: YARN-7859-v1.patch, screenshot-1.png
>
>   Original Estimate: 24h
>  Remaining Estimate: 24h
>
>  As is well known, in FairScheduler queue scheduling starvation often occurs 
> when the number of cluster jobs is large: the apps in one or more queues stay 
> pending. So I have thought of a way to solve this problem: add a queue 
> scheduling deadline to FairScheduler. When a queue has not been scheduled by 
> FairScheduler within a specified time, we schedule it forcibly!
> The community's current way of solving queue scheduling starvation is to 
> preempt containers, but that approach may increase the failure rate of jobs.
> On the basis of the above, I propose this issue...



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Issue Comment Deleted] (YARN-7859) New feature: add queue scheduling deadLine in fairScheduler.

2018-01-31 Thread wangwj (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-7859?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

wangwj updated YARN-7859:
-
Comment: was deleted

(was: In my cluster,I did an experiment.
 !screenshot-1.png! )

> New feature: add queue scheduling deadLine in fairScheduler.
> 
>
> Key: YARN-7859
> URL: https://issues.apache.org/jira/browse/YARN-7859
> Project: Hadoop YARN
>  Issue Type: New Feature
>  Components: fairscheduler
>Affects Versions: 3.0.0
>Reporter: wangwj
>Priority: Major
>  Labels: fairscheduler, features, patch
> Fix For: 3.0.0
>
> Attachments: YARN-7859-v1.patch, screenshot-1.png
>
>   Original Estimate: 24h
>  Remaining Estimate: 24h
>
>  As is well known, in FairScheduler queue scheduling starvation often occurs 
> when the number of cluster jobs is large: the apps in one or more queues stay 
> pending. So I have thought of a way to solve this problem: add a queue 
> scheduling deadline to FairScheduler. When a queue has not been scheduled by 
> FairScheduler within a specified time, we schedule it forcibly!
> The community's current way of solving queue scheduling starvation is to 
> preempt containers, but that approach may increase the failure rate of jobs.
> On the basis of the above, I propose this issue...



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (YARN-7859) New feature: add queue scheduling deadLine in fairScheduler.

2018-01-31 Thread wangwj (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7859?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16346762#comment-16346762
 ] 

wangwj edited comment on YARN-7859 at 1/31/18 12:51 PM:


In my cluster, I did an experiment.
There are two queues in my cluster:
 !screenshot-1.png! 
The configuration associated with this issue is:
 !screenshot-3.png! 
I ran two jobs in each queue.
Of course, before the experiment I added some logging to the code.
After the two jobs completed, I extracted some of the logs...
From the logs we can see that a queue is scheduled forcibly if it was not 
scheduled within 3 s.


was (Author: wangwj):
In my cluster, I did an experiment.
There are two queues in my cluster:
 !screenshot-2.png! 
The configuration associated with this issue is:
 !screenshot-3.png! 
I ran two jobs in each queue.
Of course, before the experiment I added some logging to the code.
After the two jobs completed, I extracted some of the logs...
From the logs we can see that a queue is scheduled forcibly if it was not 
scheduled within 3 s.
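
To make the described behavior concrete, here is a rough sketch of the deadline 
idea; the names are hypothetical and this is not taken from the attached 
YARN-7859-v1.patch. It records when each queue was last scheduled and flags any 
queue that has waited longer than the deadline, e.g. 3 s:

{code:java}
import java.util.HashMap;
import java.util.Map;

class QueueDeadlineTracker {
  private final long deadlineMs;                     // e.g. 3000 for 3 s
  private final Map<String, Long> lastScheduled = new HashMap<>();

  QueueDeadlineTracker(long deadlineMs) { this.deadlineMs = deadlineMs; }

  /** Record that FairScheduler just scheduled this queue. */
  void markScheduled(String queue) {
    lastScheduled.put(queue, System.currentTimeMillis());
  }

  /** True if the queue has not been scheduled within the deadline. */
  boolean mustForceSchedule(String queue) {
    long last = lastScheduled.getOrDefault(queue, System.currentTimeMillis());
    return System.currentTimeMillis() - last >= deadlineMs;
  }
}
{code}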

> New feature: add queue scheduling deadLine in fairScheduler.
> 
>
> Key: YARN-7859
> URL: https://issues.apache.org/jira/browse/YARN-7859
> Project: Hadoop YARN
>  Issue Type: New Feature
>  Components: fairscheduler
>Affects Versions: 3.0.0
>Reporter: wangwj
>Priority: Major
>  Labels: fairscheduler, features, patch
> Fix For: 3.0.0
>
> Attachments: YARN-7859-v1.patch, screenshot-1.png, screenshot-3.png
>
>   Original Estimate: 24h
>  Remaining Estimate: 24h
>
>  As is well known, in FairScheduler queue scheduling starvation often occurs 
> when the number of cluster jobs is large: the apps in one or more queues stay 
> pending. So I have thought of a way to solve this problem: add a queue 
> scheduling deadline to FairScheduler. When a queue has not been scheduled by 
> FairScheduler within a specified time, we schedule it forcibly!
> The community's current way of solving queue scheduling starvation is to 
> preempt containers, but that approach may increase the failure rate of jobs.
> On the basis of the above, I propose this issue...



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-7859) New feature: add queue scheduling deadLine in fairScheduler.

2018-01-31 Thread wangwj (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-7859?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

wangwj updated YARN-7859:
-
Attachment: (was: screenshot-2.png)

> New feature: add queue scheduling deadLine in fairScheduler.
> 
>
> Key: YARN-7859
> URL: https://issues.apache.org/jira/browse/YARN-7859
> Project: Hadoop YARN
>  Issue Type: New Feature
>  Components: fairscheduler
>Affects Versions: 3.0.0
>Reporter: wangwj
>Priority: Major
>  Labels: fairscheduler, features, patch
> Fix For: 3.0.0
>
> Attachments: YARN-7859-v1.patch, screenshot-1.png, screenshot-3.png
>
>   Original Estimate: 24h
>  Remaining Estimate: 24h
>
>  As is well known, in FairScheduler queue scheduling starvation often occurs 
> when the number of cluster jobs is large: the apps in one or more queues stay 
> pending. So I have thought of a way to solve this problem: add a queue 
> scheduling deadline to FairScheduler. When a queue has not been scheduled by 
> FairScheduler within a specified time, we schedule it forcibly!
> The community's current way of solving queue scheduling starvation is to 
> preempt containers, but that approach may increase the failure rate of jobs.
> On the basis of the above, I propose this issue...



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-7859) New feature: add queue scheduling deadLine in fairScheduler.

2018-01-31 Thread wangwj (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-7859?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

wangwj updated YARN-7859:
-
Attachment: log

> New feature: add queue scheduling deadLine in fairScheduler.
> 
>
> Key: YARN-7859
> URL: https://issues.apache.org/jira/browse/YARN-7859
> Project: Hadoop YARN
>  Issue Type: New Feature
>  Components: fairscheduler
>Affects Versions: 3.0.0
>Reporter: wangwj
>Priority: Major
>  Labels: fairscheduler, features, patch
> Fix For: 3.0.0
>
> Attachments: YARN-7859-v1.patch, log, screenshot-1.png, 
> screenshot-3.png
>
>   Original Estimate: 24h
>  Remaining Estimate: 24h
>
>  As is well known, in FairScheduler queue scheduling starvation often occurs 
> when the number of cluster jobs is large: the apps in one or more queues stay 
> pending. So I have thought of a way to solve this problem: add a queue 
> scheduling deadline to FairScheduler. When a queue has not been scheduled by 
> FairScheduler within a specified time, we schedule it forcibly!
> The community's current way of solving queue scheduling starvation is to 
> preempt containers, but that approach may increase the failure rate of jobs.
> On the basis of the above, I propose this issue...



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-7738) CapacityScheduler: Support refresh maximum allocation for multiple resource types

2018-01-31 Thread Wangda Tan (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7738?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16346861#comment-16346861
 ] 

Wangda Tan commented on YARN-7738:
--

[~water] This is not an issue that exists in 2.7.3; this is a 3.0/3.1 issue.

> CapacityScheduler: Support refresh maximum allocation for multiple resource 
> types
> -
>
> Key: YARN-7738
> URL: https://issues.apache.org/jira/browse/YARN-7738
> Project: Hadoop YARN
>  Issue Type: Bug
>Reporter: Sumana Sathish
>Assignee: Wangda Tan
>Priority: Blocker
> Fix For: 3.1.0
>
> Attachments: YARN-7738.001.patch, YARN-7738.002.patch, 
> YARN-7738.003.patch, YARN-7738.004.patch
>
>
> Currently CapacityScheduler fails to refresh maximum allocation for multiple 
> resource types.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-7828) Clicking on yarn service should take to component tab

2018-01-31 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7828?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16346876#comment-16346876
 ] 

Hudson commented on YARN-7828:
--

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #13587 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/13587/])
YARN-7828. Clicking on yarn service should take to component tab. (Sunil G via 
wangda) (wangda: rev 5ca4bf22dd45d9ac1328c74204413d90725d1405)
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/app/controllers/yarn-app-attempt.js
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/app/templates/yarn-app.hbs
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/app/controllers/yarn-component-instance.js
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/app/controllers/app-table-columns.js
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/app/controllers/yarn-app.js
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/app/controllers/yarn-component-instances.js


> Clicking on yarn service should take to component tab
> -
>
> Key: YARN-7828
> URL: https://issues.apache.org/jira/browse/YARN-7828
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: yarn-ui-v2
>Reporter: Yesha Vora
>Assignee: Sunil G
>Priority: Major
> Fix For: 3.1.0
>
> Attachments: YARN-7828.001.patch
>
>
> Steps:
> 1) Enable ATS 2
> 2) Start Httpd yarn service
> 3) Go to UI2 Services tab
> 4) Click on yarn service
> This page redirects to the Attempt-list tab.
> However, it should redirect to the Components tab.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-6592) Rich placement constraints in YARN

2018-01-31 Thread Arun Suresh (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-6592?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arun Suresh updated YARN-6592:
--
Fix Version/s: 3.1.0

> Rich placement constraints in YARN
> --
>
> Key: YARN-6592
> URL: https://issues.apache.org/jira/browse/YARN-6592
> Project: Hadoop YARN
>  Issue Type: New Feature
>Reporter: Konstantinos Karanasos
>Assignee: Arun Suresh
>Priority: Major
> Fix For: 3.1.0
>
> Attachments: YARN-6592-Rich-Placement-Constraints-Design-V1.pdf
>
>
> This JIRA consolidates the efforts of YARN-5468 and YARN-4902.
> It adds support for rich placement constraints to YARN, such as affinity and 
> anti-affinity between allocations within the same or across applications.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Assigned] (YARN-6592) Rich placement constraints in YARN

2018-01-31 Thread Arun Suresh (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-6592?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arun Suresh reassigned YARN-6592:
-

Assignee: Arun Suresh

> Rich placement constraints in YARN
> --
>
> Key: YARN-6592
> URL: https://issues.apache.org/jira/browse/YARN-6592
> Project: Hadoop YARN
>  Issue Type: New Feature
>Reporter: Konstantinos Karanasos
>Assignee: Arun Suresh
>Priority: Major
> Fix For: 3.1.0
>
> Attachments: YARN-6592-Rich-Placement-Constraints-Design-V1.pdf
>
>
> This JIRA consolidates the efforts of YARN-5468 and YARN-4902.
> It adds support for rich placement constraints to YARN, such as affinity and 
> anti-affinity between allocations within the same or across applications.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Closed] (YARN-7792) Merge work for YARN-6592

2018-01-31 Thread Arun Suresh (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-7792?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arun Suresh closed YARN-7792.
-
Assignee: Sunil G

> Merge work for YARN-6592
> 
>
> Key: YARN-7792
> URL: https://issues.apache.org/jira/browse/YARN-7792
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Sunil G
>Assignee: Sunil G
>Priority: Blocker
> Fix For: 3.1.0
>
> Attachments: YARN-6592.001.patch, YARN-7792.002.patch, 
> YARN-7792.003.patch, YARN-7792.004.patch
>
>
> This Jira is to run the aggregated YARN-6592 branch patch against trunk and 
> check for any jenkins issues.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-7822) Constraint satisfaction checker support for composite OR and AND constraints

2018-01-31 Thread Arun Suresh (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7822?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16347250#comment-16347250
 ] 

Arun Suresh commented on YARN-7822:
---

Cherry-picked to trunk and deleted branch YARN-7812

> Constraint satisfaction checker support for composite OR and AND constraints
> 
>
> Key: YARN-7822
> URL: https://issues.apache.org/jira/browse/YARN-7822
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Arun Suresh
>Assignee: Weiwei Yang
>Priority: Major
> Fix For: 3.1.0
>
> Attachments: YARN-7822-YARN-6592.001.patch, 
> YARN-7822-YARN-6592.002.patch, YARN-7822-YARN-6592.003.patch, 
> YARN-7822-YARN-6592.004.patch, YARN-7822-YARN-6592.005.patch, 
> YARN-7822-YARN-6592.006.patch
>
>
> JIRA to track changes to {{PlacementConstraintsUtil#canSatisfyConstraints}} to 
> handle OR and AND composite constraints.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-7822) Constraint satisfaction checker support for composite OR and AND constraints

2018-01-31 Thread Arun Suresh (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-7822?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arun Suresh updated YARN-7822:
--
Target Version/s: 3.1.0
   Fix Version/s: (was: YARN-7812)
  3.1.0

> Constraint satisfaction checker support for composite OR and AND constraints
> 
>
> Key: YARN-7822
> URL: https://issues.apache.org/jira/browse/YARN-7822
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Arun Suresh
>Assignee: Weiwei Yang
>Priority: Major
> Fix For: 3.1.0
>
> Attachments: YARN-7822-YARN-6592.001.patch, 
> YARN-7822-YARN-6592.002.patch, YARN-7822-YARN-6592.003.patch, 
> YARN-7822-YARN-6592.004.patch, YARN-7822-YARN-6592.005.patch, 
> YARN-7822-YARN-6592.006.patch
>
>
> JIRA to track changes to {{PlacementConstraintsUtil#canSatisfyConstraints}} to 
> handle OR and AND composite constraints.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-7857) -fstack-check compilation flag causes binary incompatibility for container-executor between RHEL 6 and RHEL 7

2018-01-31 Thread Jim Brennan (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7857?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16347175#comment-16347175
 ] 

Jim Brennan commented on YARN-7857:
---

Thanks [~miklos.szeg...@cloudera.com] for the detailed analysis!

I am not suggesting that we revert the fix from YARN-7796 - it clearly resolves 
the failure we were seeing and works in all cases.

The proposal in this Jira is to remove the {{-fstack-check}} flag because it 
has been shown to cause binary incompatibility issues, depending on the version 
of gcc a binary is compiled with and the OS it is run on.
{quote}As a conclusion, the stack check code seems to be legitimate. However, 
the code might address the same memory later ending up with the same crash 
without stack checking.
{quote}
I'm not sure I follow this. It sounds like you've shown that the size of the 
buffer matters: a 110 KB buffer compiled on RHEL 6 with {{-fstack-check}} works 
on RHEL 7, while it fails for a 128 KB buffer. But the 128 KB buffer works when 
compiled on RHEL 7 (with or without {{-fstack-check}}). I agree that it may be 
that the RHEL 6 version of the stack checking code is tripping some kernel 
protection when the buffer is big enough, while the RHEL 7 version of the stack 
checking code does not. That seems like an incompatibility.

My concern is that if we leave the {{-fstack-check}} flag there, some future 
change may cause a similar problem to the one we fixed in YARN-7796.

> -fstack-check compilation flag causes binary incompatibility for 
> container-executor between RHEL 6 and RHEL 7
> -
>
> Key: YARN-7857
> URL: https://issues.apache.org/jira/browse/YARN-7857
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: nodemanager
>Affects Versions: 3.0.0
>Reporter: Jim Brennan
>Assignee: Jim Brennan
>Priority: Major
>
> The segmentation fault in container-executor reported in [YARN-7796] appears 
> to be due to a binary compatibility issue with the {{-fstack-check}} flag 
> that was added in [YARN-6721].
> Based on my testing, a container-executor (without the patch from 
> [YARN-7796]) compiled on RHEL 6 with the -fstack-check flag always hits this 
> segmentation fault when run on RHEL 7.  But if you compile without this flag, 
> the container-executor runs on RHEL 7 with no problems.  I also verified this 
> with a simple program that just does the copy_file.
> I think we need to either remove this flag, or find a suitable alternative.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-7780) Documentation for Placement Constraints

2018-01-31 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7780?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16347124#comment-16347124
 ] 

Hudson commented on YARN-7780:
--

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #13589 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/13589/])
YARN-7780. Documentation for Placement Constraints. (Konstantinos Karanasos via 
arun suresh) (arun suresh: rev 8df7666fe19f124e80bcc63c496607e085fcf804)
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/api/resource/PlacementConstraints.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/conf/YarnConfiguration.java
* (add) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-site/src/site/markdown/PlacementConstraints.md.vm


> Documentation for Placement Constraints
> ---
>
> Key: YARN-7780
> URL: https://issues.apache.org/jira/browse/YARN-7780
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Arun Suresh
>Assignee: Konstantinos Karanasos
>Priority: Major
> Fix For: YARN-6592
>
> Attachments: YARN-7780-YARN-6592.001.patch, 
> YARN-7780-YARN-6592.002.patch, YARN-7780-YARN-6592.003.patch
>
>
> JIRA to track documentation for the feature.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-6593) [API] Introduce Placement Constraint object

2018-01-31 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6593?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16347096#comment-16347096
 ] 

Hudson commented on YARN-6593:
--

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #13589 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/13589/])
YARN-6593. [API] Introduce Placement Constraint object. (Konstantinos Karanasos 
via arun suresh) (arun suresh: rev 33a796d9b778bf7350e87a4e36ca30c925cf7036)
* (add) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/api/resource/PlacementConstraint.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/proto/yarn_protos.proto
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/java/org/apache/hadoop/yarn/api/records/impl/pb/ProtoUtils.java
* (add) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/java/org/apache/hadoop/yarn/api/resource/PlacementConstraintTransformations.java
* (add) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/java/org/apache/hadoop/yarn/api/resource/package-info.java
* (add) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/api/resource/package-info.java
* (add) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/java/org/apache/hadoop/yarn/api/pb/PlacementConstraintToProtoConverter.java
* (add) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/test/java/org/apache/hadoop/yarn/api/resource/TestPlacementConstraintTransformations.java
* (add) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/java/org/apache/hadoop/yarn/api/pb/package-info.java
* (add) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/test/java/org/apache/hadoop/yarn/api/resource/TestPlacementConstraints.java
* (add) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/java/org/apache/hadoop/yarn/api/pb/PlacementConstraintFromProtoConverter.java
* (add) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/api/resource/PlacementConstraints.java
* (add) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/test/java/org/apache/hadoop/yarn/api/TestPlacementConstraintPBConversion.java


> [API] Introduce Placement Constraint object
> ---
>
> Key: YARN-6593
> URL: https://issues.apache.org/jira/browse/YARN-6593
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Konstantinos Karanasos
>Assignee: Konstantinos Karanasos
>Priority: Major
> Fix For: YARN-6592
>
> Attachments: YARN-6593.001.patch, YARN-6593.002.patch, 
> YARN-6593.003.patch, YARN-6593.004.patch, YARN-6593.005.patch, 
> YARN-6593.006.patch, YARN-6593.007.patch, YARN-6593.008.patch
>
>
> Just removed the fix version and moved it to the target version, as we set the 
> fix version only after the patch is committed.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-7866) [UI2] Kerberizing the UI doesn't give any warning or content when UI is accessed without kinit

2018-01-31 Thread Sunil G (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-7866?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sunil G updated YARN-7866:
--
Description: 
Handle the 401 error and show it in the UI.

Credit to [~ssath...@hortonworks.com] for finding this issue.

  was:Handle 401 error and show in UI


> [UI2] Kerberizing the UI doesn't give any warning or content when UI is 
> accessed without kinit
> --
>
> Key: YARN-7866
> URL: https://issues.apache.org/jira/browse/YARN-7866
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: yarn-ui-v2
>Reporter: Sunil G
>Assignee: Sunil G
>Priority: Major
>
> Handle the 401 error and show it in the UI.
> Credit to [~ssath...@hortonworks.com] for finding this issue.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-7815) Mount the filecache as read-only in Docker containers

2018-01-31 Thread Eric Badger (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7815?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16347183#comment-16347183
 ] 

Eric Badger commented on YARN-7815:
---

{quote}I think that leaves us with this proposal which should accomplish that 
and remove one of the mounts being made today:

1. nm-local-dir/filecache mounted read-only for access to localized public files
2. nm-local-dir/usercache/_user_/filecache mounted read-only for access to 
localized user-private files
3. nm-local-dir/usercache/_user_/appcache/_applicationId_ mounted read-write 
for access to the application work area and underlying container working 
directory
{quote}
That approach sounds good to me

> Mount the filecache as read-only in Docker containers
> -
>
> Key: YARN-7815
> URL: https://issues.apache.org/jira/browse/YARN-7815
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Shane Kumpf
>Assignee: Shane Kumpf
>Priority: Major
>
> Currently, when using the Docker runtime, the filecache directories are 
> mounted read-write into the Docker containers. Read-write access is not 
> necessary. We should make this more restrictive by changing that mount to 
> read-only.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-7862) YARN native service REST endpoint needs user.name as query param

2018-01-31 Thread Sunil G (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7862?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16347205#comment-16347205
 ] 

Sunil G commented on YARN-7862:
---

Thanks [~eyang]

I agree with the last point about sending the WWW-Authenticate header etc. But 
in non-secure mode, the UI now has to send the user.name option. It is not good 
for the UI to always send user.name with some dummy value in non-secure mode, 
because even after adding it the server just skips it. Since the server skips 
it, it should not be marked as a mandatory option for non-secure mode.

> YARN native service REST endpoint needs user.name as query param
> 
>
> Key: YARN-7862
> URL: https://issues.apache.org/jira/browse/YARN-7862
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: yarn-native-services
>Reporter: Sunil G
>Priority: Major
>
> While accessing the below YARN REST endpoint with the POST method type,
> {code:java}
> http://rm_ip:8088/app/v1/services{code}
> the below error comes up in a non-secure cluster.
> {noformat}
> {
> "diagnostics": "Null user"
> }{noformat}
> When *user.name* is provided as a query param with *dr.who*, we can see that 
> yarn starts the service with the proxy user, not dr.who.
> In a non-secure cluster, the native service should ideally take the user from 
> the remote UGI.
> In a secure cluster, it is better to derive the user from the kerberized shell.
>  
> cc/  [~jianhe] [~eyang]
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (YARN-7862) YARN native service REST endpoint needs user.name as query param

2018-01-31 Thread Eric Yang (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7862?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16347189#comment-16347189
 ] 

Eric Yang edited comment on YARN-7862 at 1/31/18 5:32 PM:
--

Hi [~sunilg],

The YARN native service REST API already supports Hadoop delegation tokens, and 
they take precedence over the user.name parameter.  However, Hadoop does not 
have a single login form to validate a username and password to issue a 
delegation token.  Without Knox, users can come into the system from various 
entry points using a web browser.  This is the reason the user.name parameter 
is used to let the server know who the end user should be, in the absence of 
Knox verifying the end user's credentials.  user.name is a stop-gap solution 
for not having SSO.  For the POST request, use a URL that looks like this:

{code}
http://rm_ip:8088/app/v1/services?user.name=foobar
{code}

If you have obtained a delegation token somehow, then you can forward the 
cookie to:

{code}
http://rm_ip:8088/app/v1/services
Set-Cookie: hadoop.auth=...; Path=/; Domain=example.com; HttpOnly
{code}

In a Kerberos-enabled cluster, you can submit the request with the 
WWW-Authenticate header and a Kerberos ticket, and the request will work.


was (Author: eyang):
Hi [~sunilg],

The YARN native service REST API already supports Hadoop delegation tokens, and 
they take precedence over the user.name parameter.  However, Hadoop does not 
have a single login form to validate a username and password to issue a 
delegation token.  Without Knox, users can come into the system from various 
entry points using a web browser.  This is the reason the user.name parameter 
is used to let the server know who the end user should be, in the absence of 
Knox verifying the end user's credentials.  user.name is a stop-gap solution 
for not having SSO.  For the POST request, use a URL that looks like this:

{code}
http://rm_ip:8088/app/v1/services?user.name=foobar
{code}

If you have obtained a delegation token somehow, then you can forward the 
cookie to:

{code}
http://rm_ip:8088/app/v1/services
{code}

In a Kerberos-enabled cluster, you can submit the request with the 
WWW-Authenticate header and a Kerberos ticket, and the request will work.
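
As a concrete illustration of the user.name approach above, a minimal Java 
sketch that POSTs to the endpoint from the example URL; the service payload is 
elided and foobar is the placeholder user from the comment above:

{code:java}
import java.io.OutputStream;
import java.net.HttpURLConnection;
import java.net.URL;
import java.nio.charset.StandardCharsets;

public class SubmitService {
  public static void main(String[] args) throws Exception {
    // non-secure mode: identify the end user via the user.name query param
    URL url = new URL("http://rm_ip:8088/app/v1/services?user.name=foobar");
    HttpURLConnection conn = (HttpURLConnection) url.openConnection();
    conn.setRequestMethod("POST");
    conn.setRequestProperty("Content-Type", "application/json");
    conn.setDoOutput(true);
    String payload = "{}"; // service definition elided
    try (OutputStream out = conn.getOutputStream()) {
      out.write(payload.getBytes(StandardCharsets.UTF_8));
    }
    System.out.println("HTTP " + conn.getResponseCode());
  }
}
{code}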

> YARN native service REST endpoint needs user.name as query param
> 
>
> Key: YARN-7862
> URL: https://issues.apache.org/jira/browse/YARN-7862
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: yarn-native-services
>Reporter: Sunil G
>Priority: Major
>
> While accessing the below YARN REST endpoint with the POST method type,
> {code:java}
> http://rm_ip:8088/app/v1/services{code}
> the below error comes up in a non-secure cluster.
> {noformat}
> {
> "diagnostics": "Null user"
> }{noformat}
> When *user.name* is provided as a query param with *dr.who*, we can see that 
> yarn starts the service with the proxy user, not dr.who.
> In a non-secure cluster, the native service should ideally take the user from 
> the remote UGI.
> In a secure cluster, it is better to derive the user from the kerberized shell.
>  
> cc/  [~jianhe] [~eyang]
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-7613) Implement Basic algorithm for constraint based placement

2018-01-31 Thread Arun Suresh (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-7613?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arun Suresh updated YARN-7613:
--
Fix Version/s: 3.1.0

> Implement Basic algorithm for constraint based placement
> 
>
> Key: YARN-7613
> URL: https://issues.apache.org/jira/browse/YARN-7613
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Arun Suresh
>Assignee: Panagiotis Garefalakis
>Priority: Major
> Fix For: 3.1.0
>
> Attachments: YARN-7613-YARN-6592.001.patch, 
> YARN-7613-YARN-6592.002.patch, YARN-7613-YARN-6592.003.patch, 
> YARN-7613-YARN-6592.004.patch, YARN-7613-YARN-6592.005.patch, 
> YARN-7613-YARN-6592.006.patch, YARN-7613.wip.patch
>
>




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Resolved] (YARN-6592) Rich placement constraints in YARN

2018-01-31 Thread Arun Suresh (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-6592?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arun Suresh resolved YARN-6592.
---
  Resolution: Fixed
Target Version/s: 3.1.0

All tasks have been completed. Merged the branch with trunk.
Thanks [~kkaranasos], [~leftnoteasy], [~pgaref], [~cheersyang] and [~sunilg] 
for all the effort here.
Thanks also to [~chris.douglas], [~subru], [~curino] and [~vinodkv] for the 
discussions.

> Rich placement constraints in YARN
> --
>
> Key: YARN-6592
> URL: https://issues.apache.org/jira/browse/YARN-6592
> Project: Hadoop YARN
>  Issue Type: New Feature
>Reporter: Konstantinos Karanasos
>Assignee: Arun Suresh
>Priority: Major
> Fix For: 3.1.0
>
> Attachments: YARN-6592-Rich-Placement-Constraints-Design-V1.pdf
>
>
> This JIRA consolidates the efforts of YARN-5468 and YARN-4902.
> It adds support for rich placement constraints to YARN, such as affinity and 
> anti-affinity between allocations within the same or across applications.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-7516) Security check for trusted docker image

2018-01-31 Thread Billie Rinaldi (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7516?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16347152#comment-16347152
 ] 

Billie Rinaldi commented on YARN-7516:
--

I like the idea of having two prefix lists. That would give admins greater 
control over what is allowed to run.

> Security check for trusted docker image
> ---
>
> Key: YARN-7516
> URL: https://issues.apache.org/jira/browse/YARN-7516
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Eric Yang
>Assignee: Eric Yang
>Priority: Major
> Attachments: YARN-7516.001.patch, YARN-7516.002.patch, 
> YARN-7516.003.patch, YARN-7516.004.patch, YARN-7516.005.patch, 
> YARN-7516.006.patch, YARN-7516.007.patch, YARN-7516.008.patch, 
> YARN-7516.009.patch, YARN-7516.010.patch, YARN-7516.011.patch, 
> YARN-7516.012.patch, YARN-7516.013.patch, YARN-7516.014.patch, 
> YARN-7516.015.patch
>
>
> Hadoop YARN services can support using a private docker registry image or a 
> docker image from docker hub.  In the current implementation, Hadoop security 
> is enforced through username and group membership, and enforces uid:gid 
> consistency between the docker container and the distributed file system.  
> There is a cloud use case for the ability to run untrusted docker images on 
> the same cluster for testing.
> The basic requirement for untrusted containers is to ensure all kernel and 
> root privileges are dropped, and that there is no interaction with the 
> distributed file system, to avoid contamination.  We can probably enforce 
> detection of untrusted docker images by checking the following:
> # If the docker image is from the public docker hub repository, the container 
> is automatically flagged as insecure, disk volume mounts are disabled 
> automatically, and all kernel capabilities are dropped.
> # If the docker image is from a private repository in docker hub, and a white 
> list allows the private repository, disk volume mounts are allowed and kernel 
> capabilities follow the allowed list.
> # If the docker image is from a private trusted registry with an image name 
> like "private.registry.local:5000/centos", and the white list allows this 
> private trusted registry, disk volume mounts are allowed and kernel 
> capabilities follow the allowed list.
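
For illustration, a minimal sketch of the prefix-list check described above; 
the class, whitelist contents, and helper are hypothetical, and this is not the 
actual container-executor implementation:

{code:java}
import java.util.Arrays;
import java.util.List;

class TrustedImageCheck {
  // hypothetical whitelist of trusted registry prefixes
  static final List<String> TRUSTED_PREFIXES =
      Arrays.asList("private.registry.local:5000/", "trusted-repo/");

  /** An image is trusted iff its name starts with a whitelisted prefix. */
  static boolean isTrusted(String image) {
    return TRUSTED_PREFIXES.stream().anyMatch(image::startsWith);
  }

  public static void main(String[] args) {
    // trusted: volume mounts allowed, capabilities follow the allowed list
    System.out.println(isTrusted("private.registry.local:5000/centos")); // true
    // untrusted public hub image: flagged insecure, mounts disabled
    System.out.println(isTrusted("library/ubuntu"));                     // false
  }
}
{code}

Two separate prefix lists, as discussed above, would simply split 
TRUSTED_PREFIXES into one list per trust level.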



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-7866) [UI2] Kerberizing the UI doesn't give any warning or content when UI is accessed without kinit

2018-01-31 Thread Sunil G (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-7866?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sunil G updated YARN-7866:
--
Attachment: YARN-7866.001.patch

> [UI2] Kerberizing the UI doesn't give any warning or content when UI is 
> accessed without kinit
> --
>
> Key: YARN-7866
> URL: https://issues.apache.org/jira/browse/YARN-7866
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: yarn-ui-v2
>Reporter: Sunil G
>Assignee: Sunil G
>Priority: Major
> Attachments: YARN-7866.001.patch
>
>
> Handle the 401 error and show it in the UI.
> Credit to [~ssath...@hortonworks.com] for finding this issue.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-7866) [UI2] Kerberizing the UI doesn't give any warning or content when UI is accessed without kinit

2018-01-31 Thread Sunil G (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7866?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16347186#comment-16347186
 ] 

Sunil G commented on YARN-7866:
---

[~Sreenath], could you please review this?

> [UI2] Kerberizing the UI doesn't give any warning or content when UI is 
> accessed without kinit
> --
>
> Key: YARN-7866
> URL: https://issues.apache.org/jira/browse/YARN-7866
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: yarn-ui-v2
>Reporter: Sunil G
>Assignee: Sunil G
>Priority: Major
> Attachments: YARN-7866.001.patch
>
>
> Handle the 401 error and show it in the UI.
> Credit to [~ssath...@hortonworks.com] for finding this issue.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-4606) CapacityScheduler: applications could get starved because computation of #activeUsers considers pending apps

2018-01-31 Thread Sunil G (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-4606?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16347220#comment-16347220
 ] 

Sunil G commented on YARN-4606:
---

Thanks [~eepayne]. You are correct. 
 - {{activeUsers}}: users that have at least one active app
 - {{activeUsersOfPendingApps}}: users that have only pending apps

This is the correct definition.
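
To put rough numbers on the distinction above, using the four users from the 
example in the issue description below, here is a toy sketch; the names and the 
capacity figure are made up for illustration and this is not the actual 
ActiveUsersManager or user-limit code:

{code:java}
class UserLimitSketch {
  public static void main(String[] args) {
    int activeUsers = 2;               // user1/user2: at least one active app
    int activeUsersOfPendingApps = 2;  // user3/user4: only pending apps
    long queueCapacityMB = 4096;       // illustrative queue capacity

    // buggy behavior: all four users counted, so the per-user limit is halved
    long buggyLimit = queueCapacityMB / (activeUsers + activeUsersOfPendingApps);
    // intended behavior: divide only among users with active apps
    long fixedLimit = queueCapacityMB / activeUsers;

    System.out.println("buggy=" + buggyLimit + "MB fixed=" + fixedLimit + "MB");
  }
}
{code}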

> CapacityScheduler: applications could get starved because computation of 
> #activeUsers considers pending apps 
> -
>
> Key: YARN-4606
> URL: https://issues.apache.org/jira/browse/YARN-4606
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: capacity scheduler, capacityscheduler
>Affects Versions: 2.8.0, 2.7.1
>Reporter: Karam Singh
>Assignee: Wangda Tan
>Priority: Critical
> Attachments: YARN-4606.1.poc.patch
>
>
> Currently, if all applications belonging to the same user in a LeafQueue are 
> pending (caused by max-am-percent, etc.), ActiveUsersManager still considers 
> the user an active user. This could lead to starvation of active applications, 
> for example:
> - App1 (belongs to user1) and app2 (belongs to user2) are active; app3 
> (belongs to user3) and app4 (belongs to user4) are pending
> - ActiveUsersManager returns #active-users=4
> - However, only two users (user1/user2) are able to allocate new resources, so 
> the computed user-limit-resource could be lower than expected.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Resolved] (YARN-7862) YARN native service REST endpoint needs user.name as query param

2018-01-31 Thread Eric Yang (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-7862?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Eric Yang resolved YARN-7862.
-
Resolution: Not A Bug

> YARN native service REST endpoint needs user.name as query param
> 
>
> Key: YARN-7862
> URL: https://issues.apache.org/jira/browse/YARN-7862
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: yarn-native-services
>Reporter: Sunil G
>Priority: Major
>
> While accessing the below YARN REST endpoint with the POST method type,
> {code:java}
> http://rm_ip:8088/app/v1/services{code}
> the below error comes up in a non-secure cluster.
> {noformat}
> {
> "diagnostics": "Null user"
> }{noformat}
> When *user.name* is provided as a query param with *dr.who*, we can see that 
> yarn starts the service with the proxy user, not dr.who.
> In a non-secure cluster, the native service should ideally take the user from 
> the remote UGI.
> In a secure cluster, it is better to derive the user from the kerberized shell.
>  
> cc/  [~jianhe] [~eyang]
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-7494) Add muti node lookup support for better placement

2018-01-31 Thread Sunil G (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7494?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16346992#comment-16346992
 ] 

Sunil G commented on YARN-7494:
---

Thanks [~leftnoteasy] [~cheersyang] [~Tao Yang] for comments.

Overall, I'll summarize the suggestions along with my thoughts:
 # *multi-node-lookup* is currently enabled at the cluster level; we could also 
enable it at the application level and other subsequent levels. I will make use 
of scheduling envs for this now. Let's not add a new element to 
RegisterApplicationMasterRequest, as that would require changes from 
applications. Instead we can use the scheduling env added in the other patch. 
Once we have the type from the app (app level OR queue OR cluster), we will 
pass it to a factory to get the correct child class of {{CandidateNodeSet 
(Simple/Partition Based)}}.
 ** Expose a node lookup SCOPE option from the app in the scheduling env as 
[SCOPE:APP/QUEUE/CLUSTER].
 ** SCOPE at APP level will enable the multi-node-lookup-policy explained in 
section 2. SCOPE as QUEUE will fetch the default config of 
multi-node-placement-enabled at each queue. CLUSTER means the value of 
yarn.capacity.scheduler.multi-node-placement-enabled.
 ** SCOPE enables the option to look up multiple nodes. Given SCOPE as QUEUE, 
if multi-node-lookup is disabled at the QUEUE level, we will still look at one 
node at a time.
 # {{yarn.capacity.sorting-nodes.policy.class}} at the cluster/queue/app level 
gives the flexibility to choose the correct node lookup policy, provided 
multi-node-placement-enabled is enabled at that level. So, as [~cheersyang] 
mentioned, an app can override the queue-level policy.
 # Given we have the abstraction to select a {{MultiNodePolicy}}, the sorting 
optimization could be done in a central manager. I initially thought about this 
to avoid computation cost, but I had some concerns:
 ** Each time a node is added/removed or a capacity change happens, we need to 
refresh the node set. Having a timer that refreshes periodically is not 
desirable, since stale data in such a critical data structure is bad design.
 ** The number of nodes in a cluster generally grows, so we may end up with a 
duplicated copy per policy (given app-level policies).
 # *[Proposal for #3]* Hence we can think about an interim layer; a sketch 
follows after this list. We already have {{ClusterNodeTracker}} and the 
{{NodeFilter}} interface, so we can query this manager with any kind of filter 
we need.
 ## Each MultiNodePolicy (NodeUsageBasedPolicy, running-container based, etc.) 
will hold a reference to the original nodes retrieved from 
{{ClusterNodeTracker#getNodes}}(NodeFilter). A master {{map}} will be the 
master cache; this cache will be invalidated on a node change event.
 ## Since we have a master cache, each app's MultiNodePolicy will just fetch 
the reference from the master map (NodeUsageBasedPolicy will have its entry of 
nodes sorted in that mode).
 ## Invalidating the cache is tricky. I'll improve ClusterNodeTracker to 
register a callback that invalidates the master cache.

[~cheersyang] [~leftnoteasy] [~Tao Yang], please check this and share your 
thoughts. Once we have consensus I'll update my patch. Or, if a call is needed, 
we can quickly plan that as well.
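
To make the cache proposal concrete, here is a minimal sketch, assuming 
hypothetical names ({{MultiNodePolicy}}, {{Node}}, the invalidation callback); 
this is an illustration of the idea above, not the actual patch:
{code:java}
import java.util.List;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.function.Predicate;

// Sketch only: the types below are illustrative, not the real YARN classes.
interface Node { }

interface MultiNodePolicy {
  Predicate<Node> filter();
  List<Node> sort(List<Node> nodes); // e.g. by usage for NodeUsageBasedPolicy
}

interface ClusterNodes {
  List<Node> getNodes(Predicate<Node> filter);
}

class MultiNodeSortingCache {
  private final Map<String, List<Node>> masterCache = new ConcurrentHashMap<>();
  private final ClusterNodes tracker;

  MultiNodeSortingCache(ClusterNodes tracker) {
    this.tracker = tracker;
  }

  // Invoked via the (to-be-added) ClusterNodeTracker callback on node
  // add/remove/capacity change, avoiding a stale periodic-refresh timer.
  void invalidate() {
    masterCache.clear();
  }

  // Each app's policy fetches the shared sorted list by reference, so there
  // is one sorted copy per policy rather than one per application.
  List<Node> getSortedNodes(String policyName, MultiNodePolicy policy) {
    return masterCache.computeIfAbsent(
        policyName, k -> policy.sort(tracker.getNodes(policy.filter())));
  }
}
{code}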

> Add muti node lookup support for better placement
> -
>
> Key: YARN-7494
> URL: https://issues.apache.org/jira/browse/YARN-7494
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: capacity scheduler
>Reporter: Sunil G
>Assignee: Sunil G
>Priority: Major
> Attachments: YARN-7494.001.patch, YARN-7494.v0.patch, 
> YARN-7494.v1.patch
>
>
> Instead of a single node, for effectiveness we can consider a multi-node 
> lookup based on partition to start with.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-7822) Constraint satisfaction checker support for composite OR and AND constraints

2018-01-31 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7822?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16347161#comment-16347161
 ] 

Hudson commented on YARN-7822:
--

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #13590 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/13590/])
YARN-7822. Constraint satisfaction checker support for composite OR and (arun 
suresh: rev d4813447831770446399f2d6501860141551ff33)
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/test/java/org/apache/hadoop/yarn/api/resource/TestPlacementConstraintTransformations.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/constraint/TestPlacementProcessor.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/constraint/TestPlacementConstraintsUtil.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/constraint/PlacementConstraintsUtil.java


> Constraint satisfaction checker support for composite OR and AND constraints
> 
>
> Key: YARN-7822
> URL: https://issues.apache.org/jira/browse/YARN-7822
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Arun Suresh
>Assignee: Weiwei Yang
>Priority: Major
> Fix For: YARN-7812
>
> Attachments: YARN-7822-YARN-6592.001.patch, 
> YARN-7822-YARN-6592.002.patch, YARN-7822-YARN-6592.003.patch, 
> YARN-7822-YARN-6592.004.patch, YARN-7822-YARN-6592.005.patch, 
> YARN-7822-YARN-6592.006.patch
>
>
> JIRA to track changes to {{PlacementConstraintsUtil#canSatisfyConstraints}} 
> handle OR and AND Composite constaints



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-7221) Add security check for privileged docker container

2018-01-31 Thread Eric Badger (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7221?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16347215#comment-16347215
 ] 

Eric Badger commented on YARN-7221:
---

bq. I'm not a huge fan of relying on sudo to provide the ACLs for YARN.
I'm not wild about this either, but I'm not sure the alternative is better. I 
think the main question that needs to be asked is whether sudo access means 
privileged container access and vice versa, e.g. should a hypothetical user 
that doesn't have sudo access be allowed to run a privileged container? If the 
answer is no, then I would argue that creating these YARN ACLs is just 
reinventing Linux ACLs and is unnecessary overhead. However, if the answer is 
yes, then obviously we have to use an ACL system other than sudo. 
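
For illustration, a sudo-based check could look roughly like the sketch below. 
This is not what the patch does; {{sudo -l}} flags and exit semantics vary by 
platform, so treat it purely as a sketch:
{code:java}
import java.io.IOException;

// Illustrative probe: treat a zero exit status from "sudo -n -l -U <user>"
// as "user has some sudo privileges". Platform-dependent; sketch only.
final class SudoCheck {
  static boolean userHasSudo(String user)
      throws IOException, InterruptedException {
    Process p = new ProcessBuilder("sudo", "-n", "-l", "-U", user)
        .redirectErrorStream(true)
        .start();
    return p.waitFor() == 0;
  }
}
{code}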

> Add security check for privileged docker container
> --
>
> Key: YARN-7221
> URL: https://issues.apache.org/jira/browse/YARN-7221
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Eric Yang
>Assignee: Eric Yang
>Priority: Major
> Attachments: YARN-7221.001.patch, YARN-7221.002.patch
>
>
> When a docker container is running with privileges, the majority use case is 
> to have some program start as root and then drop privileges to another user, 
> e.g. httpd starting privileged to bind to port 80, then dropping privileges 
> to the www user.  
> # We should add a security check for submitting users, to verify they have 
> "sudo" access to run privileged containers.  
> # We should remove --user=uid:gid for privileged containers.  
>  
> Docker can be launched with the --privileged=true and --user=uid:gid flags.  
> With this parameter combination, the user will not be able to become root.  
> All docker exec commands will be dropped to the uid:gid user instead of 
> being granted privileges.  A user can gain root privileges if the container 
> file system contains files that give the user extra power, but this type of 
> image is considered dangerous.  A non-privileged user can launch a container 
> with special bits to acquire the same level of root power.  Hence, we lose 
> control of which images should be run with --privileged, and who has sudo 
> rights to use privileged container images.  As a result, we should check for 
> sudo access and then decide to parameterize --privileged=true OR 
> --user=uid:gid.  This will avoid leading developers down the wrong path.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-7849) TestMiniYarnClusterNodeUtilization#testUpdateNodeUtilization fails due to heartbeat sync error

2018-01-31 Thread Jason Lowe (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-7849?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jason Lowe updated YARN-7849:
-
Summary: TestMiniYarnClusterNodeUtilization#testUpdateNodeUtilization fails 
due to heartbeat sync error  (was: 
TestMiniYarnClusterNodeUtilization#testUpdateNodeUtilization)

> TestMiniYarnClusterNodeUtilization#testUpdateNodeUtilization fails due to 
> heartbeat sync error
> --
>
> Key: YARN-7849
> URL: https://issues.apache.org/jira/browse/YARN-7849
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: test
>Affects Versions: 3.1.0, 2.9.1, 3.0.1, 2.8.4
>Reporter: Jason Lowe
>Assignee: Botong Huang
>Priority: Major
>
> testUpdateNodeUtilization is failing.  From a branch-2.8 run:
> {noformat}
> Tests run: 1, Failures: 1, Errors: 0, Skipped: 0, Time elapsed: 13.013 sec 
> <<< FAILURE! - in 
> org.apache.hadoop.yarn.server.TestMiniYarnClusterNodeUtilization
> testUpdateNodeUtilization(org.apache.hadoop.yarn.server.TestMiniYarnClusterNodeUtilization)
>   Time elapsed: 12.961 sec  <<< FAILURE!
> java.lang.AssertionError: Containers Utillization not propagated to RMNode 
> expected:<> but was:
>   at org.junit.Assert.fail(Assert.java:88)
>   at org.junit.Assert.failNotEquals(Assert.java:743)
>   at org.junit.Assert.assertEquals(Assert.java:118)
>   at 
> org.apache.hadoop.yarn.server.TestMiniYarnClusterNodeUtilization.verifySimulatedUtilization(TestMiniYarnClusterNodeUtilization.java:227)
>   at 
> org.apache.hadoop.yarn.server.TestMiniYarnClusterNodeUtilization.testUpdateNodeUtilization(TestMiniYarnClusterNodeUtilization.java:116)
> {noformat}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-4606) CapacityScheduler: applications could get starved because computation of #activeUsers considers pending apps

2018-01-31 Thread Manikandan R (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-4606?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16347236#comment-16347236
 ] 

Manikandan R commented on YARN-4606:


Thanks [~eepayne] [~sunilg] for your comments.

{quote}And when we need to know all active users in cluster (for user-limit 
computation etc) we might need to use 
activeUsers+activeUsersOfPendingApps.{quote}

Shouldn't we use activeUsersOfPendingApps only for the user limit calculation? 
Otherwise, based on the example given in the description, won't we end up in 
the same situation (2+2 == 4)? Please correct my understanding.
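
For illustration, the arithmetic behind this concern, as a simplified sketch 
that ignores minimum-user-limit-percent and user-limit-factor:
{code:java}
// Simplified illustration only; the real CapacityScheduler user-limit
// computation involves more factors.
int queueResource = 100;

// Counting all four users (two active + two pending-only) recreates the
// starvation problem: only two users can actually consume their share.
int limitCountingAllUsers = queueResource / 4;   // 25 per user

// Counting only the users that can actually allocate:
int limitCountingActiveOnly = queueResource / 2; // 50 per user
{code}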

 

> CapacityScheduler: applications could get starved because computation of 
> #activeUsers considers pending apps 
> -
>
> Key: YARN-4606
> URL: https://issues.apache.org/jira/browse/YARN-4606
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: capacity scheduler, capacityscheduler
>Affects Versions: 2.8.0, 2.7.1
>Reporter: Karam Singh
>Assignee: Wangda Tan
>Priority: Critical
> Attachments: YARN-4606.1.poc.patch
>
>
> Currently, if all applications belonging to the same user in a LeafQueue are 
> pending (caused by max-am-percent, etc.), ActiveUsersManager still considers 
> that user an active user. This could lead to starvation of active 
> applications, for example:
> - App1 (belongs to user1)/app2 (belongs to user2) are active, app3 (belongs 
> to user3)/app4 (belongs to user4) are pending
> - ActiveUsersManager returns #active-users=4
> - However, only two users (user1/user2) are able to allocate new resources, 
> so the computed user-limit-resource could be lower than expected.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-7849) TestMiniYarnClusterNodeUtilization#testUpdateNodeUtilization fails due to heartbeat sync error

2018-01-31 Thread Botong Huang (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-7849?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Botong Huang updated YARN-7849:
---
Attachment: YARN-7849.v1.patch

> TestMiniYarnClusterNodeUtilization#testUpdateNodeUtilization fails due to 
> heartbeat sync error
> --
>
> Key: YARN-7849
> URL: https://issues.apache.org/jira/browse/YARN-7849
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: test
>Affects Versions: 3.1.0, 2.9.1, 3.0.1, 2.8.4
>Reporter: Jason Lowe
>Assignee: Botong Huang
>Priority: Major
> Attachments: YARN-7849.v1.patch
>
>
> testUpdateNodeUtilization is failing.  From a branch-2.8 run:
> {noformat}
> Tests run: 1, Failures: 1, Errors: 0, Skipped: 0, Time elapsed: 13.013 sec 
> <<< FAILURE! - in 
> org.apache.hadoop.yarn.server.TestMiniYarnClusterNodeUtilization
> testUpdateNodeUtilization(org.apache.hadoop.yarn.server.TestMiniYarnClusterNodeUtilization)
>   Time elapsed: 12.961 sec  <<< FAILURE!
> java.lang.AssertionError: Containers Utillization not propagated to RMNode 
> expected:<> but was:
>   at org.junit.Assert.fail(Assert.java:88)
>   at org.junit.Assert.failNotEquals(Assert.java:743)
>   at org.junit.Assert.assertEquals(Assert.java:118)
>   at 
> org.apache.hadoop.yarn.server.TestMiniYarnClusterNodeUtilization.verifySimulatedUtilization(TestMiniYarnClusterNodeUtilization.java:227)
>   at 
> org.apache.hadoop.yarn.server.TestMiniYarnClusterNodeUtilization.testUpdateNodeUtilization(TestMiniYarnClusterNodeUtilization.java:116)
> {noformat}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-2185) Use pipes when localizing archives

2018-01-31 Thread Jason Lowe (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-2185?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16347030#comment-16347030
 ] 

Jason Lowe commented on YARN-2185:
--

+1 lgtm.  Committing this.

> Use pipes when localizing archives
> --
>
> Key: YARN-2185
> URL: https://issues.apache.org/jira/browse/YARN-2185
> Project: Hadoop YARN
>  Issue Type: Improvement
>  Components: nodemanager
>Affects Versions: 2.4.0
>Reporter: Jason Lowe
>Assignee: Miklos Szegedi
>Priority: Major
> Attachments: YARN-2185.000.patch, YARN-2185.001.patch, 
> YARN-2185.002.patch, YARN-2185.003.patch, YARN-2185.004.patch, 
> YARN-2185.005.patch, YARN-2185.006.patch, YARN-2185.007.patch, 
> YARN-2185.008.patch, YARN-2185.009.patch, YARN-2185.010.patch, 
> YARN-2185.011.patch, YARN-2185.012.patch, YARN-2185.012.patch, 
> YARN-2185.013.patch, YARN-2185.014.patch
>
>
> Currently the nodemanager downloads an archive to a local file, unpacks it, 
> and then removes it.  It would be more efficient to stream the data as it's 
> being unpacked to avoid both the extra disk space requirements and the 
> additional disk activity from storing the archive.
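
For illustration, the pipe-based approach amounts to something like the sketch 
below, assuming a gzipped tar archive; the actual patch handles more formats 
and error cases:
{code:java}
import java.io.IOException;
import java.io.InputStream;
import java.io.OutputStream;

// Sketch: stream the archive into tar while it downloads, so the packed
// bytes never hit the local disk.
final class StreamingUnpack {
  static void unpack(InputStream download, String destDir)
      throws IOException, InterruptedException {
    Process tar = new ProcessBuilder("tar", "-xzf", "-", "-C", destDir)
        .redirectErrorStream(true)
        .start();
    try (OutputStream toTar = tar.getOutputStream(); InputStream in = download) {
      byte[] buf = new byte[64 * 1024];
      for (int n; (n = in.read(buf)) != -1; ) {
        toTar.write(buf, 0, n); // unpacking proceeds while the download runs
      }
    }
    int rc = tar.waitFor();
    if (rc != 0) {
      throw new IOException("tar exited with " + rc);
    }
  }
}
{code}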



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (YARN-7862) YARN native service REST endpoint needs user.name as query param

2018-01-31 Thread Sunil G (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7862?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16347205#comment-16347205
 ] 

Sunil G edited comment on YARN-7862 at 1/31/18 5:17 PM:


Thanks [~eyang]

I agree with the last point about sending the WWW-Authenticate header etc. But 
in non-secure mode, the UI now has to send the user.name option. It's not good 
to always send this user.name=dummy value from the UI in non-secure mode, 
because even after adding it, the server just skips the user.name value. Since 
the server skips it, I think it should not be marked as a mandatory option for 
non-secure mode.


was (Author: sunilg):
Thanks [~eyang]

I agree with the last point of sending WWW-Authenticate header etc. But for 
non-secure mode, now UI has to send with user.name option. Its not good to send 
this user.name= some dummy value always from UI in non-secure mode. Because 
after adding this also, server just skips. Since server skips this, it should 
not be marked as mandatory option for non-secure mode.

> YARN native service REST endpoint needs user.name as query param
> 
>
> Key: YARN-7862
> URL: https://issues.apache.org/jira/browse/YARN-7862
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: yarn-native-services
>Reporter: Sunil G
>Priority: Major
>
> While accessing the below YARN REST endpoint with the POST method type,
> {code:java}
> http://rm_ip:8088/app/v1/services{code}
> the below error comes up in a non-secure cluster.
> {noformat}
> {
> "diagnostics": "Null user"
> }{noformat}
> When *user.name* is provided as a query param with the value *dr.who*, we can 
> see that YARN starts the service as the proxy user, not dr.who. 
> In a non-secure cluster, the native service should ideally take the user from 
> the remote UGI.
> In a secure cluster, it's better to derive the user from the kerberized shell.
>  
> cc/  [~jianhe] [~eyang]
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-7862) YARN native service REST endpoint needs user.name as query param

2018-01-31 Thread Eric Yang (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7862?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16347248#comment-16347248
 ] 

Eric Yang commented on YARN-7862:
-

[~sunilg] I don't think we ever said user.name is mandatory.  In the absence of 
external authenticators and a delegation token, user.name is required.  The UI 
must display a 401 Unauthorized challenge to prevent information leaking to an 
anonymous user.
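
For reference, the non-secure flow under discussion boils down to a request 
like the sketch below (using the endpoint and dr.who placeholder from the 
description; the JSON body is elided):
{code:java}
import java.net.HttpURLConnection;
import java.net.URL;

// Sketch of the non-secure REST call; rm_ip is a placeholder host.
public class SubmitService {
  public static void main(String[] args) throws Exception {
    URL url = new URL("http://rm_ip:8088/app/v1/services?user.name=dr.who");
    HttpURLConnection conn = (HttpURLConnection) url.openConnection();
    conn.setRequestMethod("POST");
    conn.setRequestProperty("Content-Type", "application/json");
    conn.setDoOutput(true);
    conn.getOutputStream().write("{}".getBytes("UTF-8")); // placeholder body
    // Without user.name (and no other authenticator), the server currently
    // answers with the "Null user" diagnostics shown in the description.
    System.out.println(conn.getResponseCode());
  }
}
{code}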

> YARN native service REST endpoint needs user.name as query param
> 
>
> Key: YARN-7862
> URL: https://issues.apache.org/jira/browse/YARN-7862
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: yarn-native-services
>Reporter: Sunil G
>Priority: Major
>
> While accessing the below YARN REST endpoint with the POST method type,
> {code:java}
> http://rm_ip:8088/app/v1/services{code}
> the below error comes up in a non-secure cluster.
> {noformat}
> {
> "diagnostics": "Null user"
> }{noformat}
> When *user.name* is provided as a query param with the value *dr.who*, we can 
> see that YARN starts the service as the proxy user, not dr.who. 
> In a non-secure cluster, the native service should ideally take the user from 
> the remote UGI.
> In a secure cluster, it's better to derive the user from the kerberized shell.
>  
> cc/  [~jianhe] [~eyang]
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-7788) Factor out management of temp tags from AllocationTagsManager

2018-01-31 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7788?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16347116#comment-16347116
 ] 

Hudson commented on YARN-7788:
--

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #13589 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/13589/])
YARN-7788. Factor out management of temp tags from (arun suresh: rev 
adbe87abf8b2814e0e2988d09ef8a8569190c80e)
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/constraint/AllocationTagsManager.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/constraint/TestAllocationTagsManager.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/constraint/algorithm/DefaultPlacementAlgorithm.java
* (add) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/constraint/algorithm/LocalAllocationTagsManager.java
* (add) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/constraint/algorithm/TestLocalAllocationTagsManager.java


> Factor out management of temp tags from AllocationTagsManager
> -
>
> Key: YARN-7788
> URL: https://issues.apache.org/jira/browse/YARN-7788
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Arun Suresh
>Assignee: Arun Suresh
>Priority: Major
> Fix For: YARN-6592
>
> Attachments: YARN-7788-YARN-6592.001.patch, 
> YARN-7788-YARN-6592.002.patch, YARN-7788-YARN-6592.003.patch
>
>
> Instead of using the AllocationTagsManager to store the temp tags that get 
> generated when placing containers with constraints, we will use a 
> LocalAllocationTagsManager.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-7745) Allow DistributedShell to take a placement specification for containers it wants to launch

2018-01-31 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7745?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16347113#comment-16347113
 ] 

Hudson commented on YARN-7745:
--

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #13589 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/13589/])
YARN-7745. Allow DistributedShell to take a placement specification for (arun 
suresh: rev e60f51299dba360d13aa39f9ab714fdfc666b532)
* (add) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-applications-distributedshell/src/main/java/org/apache/hadoop/yarn/applications/distributedshell/PlacementSpec.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-applications-distributedshell/src/main/java/org/apache/hadoop/yarn/applications/distributedshell/Client.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-applications-distributedshell/src/main/java/org/apache/hadoop/yarn/applications/distributedshell/ApplicationMaster.java


> Allow DistributedShell to take a placement specification for containers it 
> wants to launch
> --
>
> Key: YARN-7745
> URL: https://issues.apache.org/jira/browse/YARN-7745
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Arun Suresh
>Assignee: Arun Suresh
>Priority: Major
> Fix For: YARN-6592
>
> Attachments: YARN-7745-YARN-6592.001.patch
>
>
> This is to add a '-placement_spec' option to the distributed shell client, 
> where the user can specify a stringified specification for how it wants 
> containers to be placed.
> For eg:
> {noformat}
> $ yarn org.apache.hadoop.yarn.applications.distributedshell.Client –jar \
> $YARN_DS/hadoop-yarn-applications-distributedshell-$YARN_VERSION.jar \
>  -shell_command sleep -shell_args 10 -placement_spec 
> {noformat}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-7448) [API] Add SchedulingRequest to the AllocateRequest

2018-01-31 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7448?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16347099#comment-16347099
 ] 

Hudson commented on YARN-7448:
--

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #13589 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/13589/])
YARN-7448. [API] Add SchedulingRequest to the AllocateRequest. (arun suresh: 
rev 69de9a1ba9a587c7e03ae7c7aeae93e04c36d665)
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/java/org/apache/hadoop/yarn/api/records/impl/pb/SchedulingRequestPBImpl.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/api/records/ResourceSizing.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/proto/yarn_service_protos.proto
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/java/org/apache/hadoop/yarn/api/protocolrecords/impl/pb/AllocateRequestPBImpl.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/api/protocolrecords/AllocateRequest.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/api/records/SchedulingRequest.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/test/java/org/apache/hadoop/yarn/api/TestPBImplRecords.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/java/org/apache/hadoop/yarn/api/records/impl/pb/ResourceSizingPBImpl.java


> [API] Add SchedulingRequest to the AllocateRequest
> --
>
> Key: YARN-7448
> URL: https://issues.apache.org/jira/browse/YARN-7448
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Arun Suresh
>Assignee: Panagiotis Garefalakis
>Priority: Major
> Fix For: YARN-6592
>
> Attachments: YARN-7448-YARN-6592.001.patch, 
> YARN-7448-YARN-6592.002.patch, YARN-7448-YARN-6592.003.patch, 
> YARN-7448-YARN-6592.004.patch, YARN-7448-YARN-6592.005.patch, 
> YARN-7448-YARN-6592.006.patch, YARN-7448-YARN-6592.007.patch, 
> YARN-7448-YARN-6592.008.patch, YARN-7448-YARN-6592.009.patch
>
>
> YARN-6594 introduces the {{SchedulingRequest}}. This JIRA tracks the 
> inclusion of the SchedulingRequest into the AllocateRequest.
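
For illustration, after this change an AM can attach scheduling requests 
roughly as in the sketch below, based on the builder-style APIs on the 
YARN-6592 branch; method names may differ slightly from the committed patch:
{code:java}
import java.util.Collections;
import org.apache.hadoop.yarn.api.protocolrecords.AllocateRequest;
import org.apache.hadoop.yarn.api.records.Resource;
import org.apache.hadoop.yarn.api.records.ResourceSizing;
import org.apache.hadoop.yarn.api.records.SchedulingRequest;

// Sketch: one SchedulingRequest asking for 2 x <1024 MB, 1 vcore> containers
// tagged "foo", attached to the AM's AllocateRequest.
class AllocateWithSchedulingRequest {
  AllocateRequest build() {
    SchedulingRequest sr = SchedulingRequest.newBuilder()
        .allocationRequestId(1L)
        .allocationTags(Collections.singleton("foo"))
        .resourceSizing(
            ResourceSizing.newInstance(2, Resource.newInstance(1024, 1)))
        .build();
    return AllocateRequest.newBuilder()
        .schedulingRequests(Collections.singletonList(sr))
        .build();
  }
}
{code}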



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-7682) Expose canSatisfyConstraints utility function to validate a placement against a constraint

2018-01-31 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7682?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16347107#comment-16347107
 ] 

Hudson commented on YARN-7682:
--

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #13589 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/13589/])
YARN-7682. Expose canSatisfyConstraints utility function to validate a (arun 
suresh: rev bdba01f73b58d2228e808c6f61377f101b6bac1c)
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/constraint/TestPlacementProcessor.java
* (add) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/constraint/PlacementConstraintsUtil.java
* (add) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/constraint/TestPlacementConstraintsUtil.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/constraint/algorithm/DefaultPlacementAlgorithm.java


> Expose canSatisfyConstraints utility function to validate a placement against 
> a constraint
> --
>
> Key: YARN-7682
> URL: https://issues.apache.org/jira/browse/YARN-7682
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Arun Suresh
>Assignee: Panagiotis Garefalakis
>Priority: Major
> Fix For: YARN-6592
>
> Attachments: YARN-7682-YARN-6592.001.patch, 
> YARN-7682-YARN-6592.002.patch, YARN-7682-YARN-6592.003.patch, 
> YARN-7682-YARN-6592.004.patch, YARN-7682-YARN-6592.005.patch, 
> YARN-7682-YARN-6592.006.patch, YARN-7682.wip.patch
>
>
> As per the discussion in YARN-7613, let's expose a {{canAssign}} method in 
> the PlacementConstraintManager that takes sourceTags, an applicationId, a 
> SchedulerNode and an AllocationTagsManager, and returns true if constraints 
> are not violated by placing the container on the node.
> I prefer not passing in the SchedulingRequest, since it can have > 1 
> numAllocations. We want this API to be called for single allocations.
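
In other words, callers would end up doing something like the sketch below 
(whether the final name is canAssign or canSatisfyConstraints; the argument 
order is illustrative, not the committed signature):
{code:java}
// Sketch of the proposed single-allocation check.
boolean ok = PlacementConstraintsUtil.canSatisfyConstraints(
    applicationId,        // app owning the request
    sourceTags,           // allocation tags of the request
    schedulerNode,        // candidate node
    placementConstraintManager,
    allocationTagsManager);
if (ok) {
  // Safe to place exactly one allocation on schedulerNode.
}
{code}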



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-7784) Fix Cluster metrics when placement processor is enabled

2018-01-31 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7784?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16347122#comment-16347122
 ] 

Hudson commented on YARN-7784:
--

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #13589 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/13589/])
YARN-7784. Fix Cluster metrics when placement processor is enabled. (arun 
suresh: rev f8c5f5b23732a1e35f012c1a6850bed09c8a5180)
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/AppSchedulingInfo.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/constraint/TestPlacementProcessor.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/common/fica/FiCaSchedulerApp.java


> Fix Cluster metrics when placement processor is enabled
> ---
>
> Key: YARN-7784
> URL: https://issues.apache.org/jira/browse/YARN-7784
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: metrics, RM
>Reporter: Weiwei Yang
>Assignee: Arun Suresh
>Priority: Major
> Fix For: YARN-6592
>
> Attachments: YARN-7784-YARN-6592.001.patch
>
>
> Reproducing steps
>  # Set up a cluster and set 
> {{yarn.resourcemanager.placement-constraints.enabled}} to true
>  # Submit a DS job with a placement constraint, such as {{-placement_spec 
> foo=2,NOTIN,NODE,foo}}
>  # Check cluster metrics from http:///cluster/apps
> While the job is running, {{Containers Running}}, {{Memory Used}} and {{VCore 
> Used}} are not updated (except for the AM); metrics from containers allocated 
> by the PlacementProcessor are not accumulated into the cluster metrics. 
> However, when the job is done, the resources are deducted, so the UI then 
> displays something like the following:
>  * Containers Running: -2
>  * Memory Used: -400
>  * VCores Used: -2
> Looks like {{AppSchedulingInfo#updateMetricsForAllocatedContainer}} was not 
> called when allocating a container.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-7763) Allow Constraints specified in the SchedulingRequest to override application level constraints

2018-01-31 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7763?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16347115#comment-16347115
 ] 

Hudson commented on YARN-7763:
--

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #13589 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/13589/])
YARN-7763. Allow Constraints specified in the SchedulingRequest to (arun 
suresh: rev 8bf7c444368f48f63f8011cf155f551c6b51ee21)
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/constraint/algorithm/DefaultPlacementAlgorithm.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/capacity/CapacityScheduler.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/placement/SingleConstraintAppPlacementAllocator.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/constraint/PlacementConstraintsUtil.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/constraint/TestPlacementConstraintsUtil.java


> Allow Constraints specified in the SchedulingRequest to override application 
> level constraints
> --
>
> Key: YARN-7763
> URL: https://issues.apache.org/jira/browse/YARN-7763
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Wangda Tan
>Assignee: Weiwei Yang
>Priority: Blocker
> Fix For: YARN-6592
>
> Attachments: YARN-7763-YARN-6592.001.patch, 
> YARN-7763-YARN-6592.002.patch, YARN-7763-YARN-6592.003.patch, 
> YARN-7763-YARN-6592.004.patch, YARN-7763-YARN-6592.005.patch, 
> YARN-7763-YARN-6592.006.patch
>
>
> As I mentioned on YARN-6599, we will add SchedulingRequest as part of the 
> PlacementConstraintUtil method, and both the processor and scheduler 
> implementations will use the same logic. The logic looks like:
> {code:java}
> PlacementConstraint pc = schedulingRequest.getPlacementConstraint();
> if (pc == null) {
>   pc = 
> PlacementConstraintMgr.getPlacementConstraint(schedulingRequest.getAllocationTags());
> }
> // Do placement constraint match ...{code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-7669) API and interface modifications for placement constraint processor

2018-01-31 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7669?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16347102#comment-16347102
 ] 

Hudson commented on YARN-7669:
--

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #13589 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/13589/])
YARN-7669. API and interface modifications for placement constraint (arun 
suresh: rev 06eb63e64b05e2e8bb8a76c15360ab0495f11317)
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/RMActiveServiceContext.java
* (add) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/constraint/api/PlacedSchedulingRequest.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/ams/ApplicationMasterServiceUtils.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/proto/yarn_service_protos.proto
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/RMContext.java
* (add) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/constraint/InvalidAllocationTagsQueryException.java
* (add) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/constraint/api/ConstraintPlacementAlgorithmOutputCollector.java
* (delete) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/constraint/AllocationTagsNamespaces.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/java/org/apache/hadoop/yarn/api/protocolrecords/impl/pb/AllocateResponsePBImpl.java
* (delete) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/constraint/InvalidAllocationTagsQueryException.java
* (add) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/constraint/AllocationTagsNamespaces.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/proto/yarn_protos.proto
* (add) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/api/records/RejectionReason.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/java/org/apache/hadoop/yarn/api/records/impl/pb/SchedulingRequestPBImpl.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/test/java/org/apache/hadoop/yarn/api/TestPBImplRecords.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/api/protocolrecords/AllocateResponse.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/RMContextImpl.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/rmcontainer/TestRMContainerImpl.java
* (add) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/constraint/api/ConstraintPlacementAlgorithm.java
* (add) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/constraint/api/SchedulingResponse.java
* (add) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/constraint/api/package-info.java
* (delete) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/constraint/TestAllocationTagsManager.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/java/org/apache/hadoop/yarn/api/records/impl/pb/ResourceSizingPBImpl.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/java/org/apache/hadoop/yarn/api/records/impl/pb/ProtoUtils.java
* (add) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/constraint/AllocationTagsManager.java
* (add) 

[jira] [Commented] (YARN-7779) Display allocation tags in RM web UI and expose same through REST API

2018-01-31 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7779?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16347117#comment-16347117
 ] 

Hudson commented on YARN-7779:
--

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #13589 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/13589/])
YARN-7779. Display allocation tags in RM web UI and expose same through (arun 
suresh: rev 9b81cb0537e5b731581e6a375bf0a59abf61c359)
* (edit) 
hadoop-tools/hadoop-sls/src/main/java/org/apache/hadoop/yarn/sls/scheduler/RMNodeWrapper.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/constraint/AllocationTagsManager.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/webapp/TestRMWebServicesNodes.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/webapp/dao/NodeInfo.java
* (add) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/webapp/dao/AllocationTagInfo.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/rmnode/RMNodeImpl.java
* (add) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/webapp/dao/AllocationTagsInfo.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/rmnode/RMNode.java
* (edit) 
hadoop-tools/hadoop-sls/src/main/java/org/apache/hadoop/yarn/sls/nodemanager/NodeInfo.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/webapp/NodesPage.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/MockNodes.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/webapp/TestNodesPage.java


> Display allocation tags in RM web UI and expose same through REST API
> -
>
> Key: YARN-7779
> URL: https://issues.apache.org/jira/browse/YARN-7779
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: RM
>Reporter: Weiwei Yang
>Assignee: Weiwei Yang
>Priority: Major
> Fix For: YARN-6592
>
> Attachments: YARN-7779-YARN-6592.001.patch, 
> YARN-7779-YARN-6592.002.patch, YARN-7779-YARN-6592.003.patch, 
> YARN-7779-YARN-6592.004.patch, YARN-7779-YARN-6592.005.patch, 
> allocationTags_nodesPage.png, allocationTags_nodesREST.png
>
>
> Propose to display node allocation tags in the RM. This will allow users to 
> check allocations w.r.t. the tags. It would be good to expose node allocation 
> tags from:  
>  * Web UI: {{http:///cluster/nodes}}
>  * REST API: {{http:///ws/v1/cluster/nodes}}, 
> {{http:///ws/v1/cluster/node/}}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-7681) Double-check placement constraints in scheduling phase before actual allocation is made

2018-01-31 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7681?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16347108#comment-16347108
 ] 

Hudson commented on YARN-7681:
--

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #13589 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/13589/])
YARN-7681. Double-check placement constraints in scheduling phase before (arun 
suresh: rev 4eda58c13641c14c4b248843a2589781cbcd343f)
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/capacity/CapacityScheduler.java


> Double-check placement constraints in scheduling phase before actual 
> allocation is made
> ---
>
> Key: YARN-7681
> URL: https://issues.apache.org/jira/browse/YARN-7681
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: RM, scheduler
>Reporter: Weiwei Yang
>Assignee: Weiwei Yang
>Priority: Major
> Fix For: YARN-6592
>
> Attachments: YARN-7681-YARN-6592.001.patch, 
> YARN-7681-YARN-6592.002.patch, YARN-7681-YARN-6592.003.patch
>
>
> This JIRA is created based on the discussions under YARN-7612, see comments 
> after [this 
> comment|https://issues.apache.org/jira/browse/YARN-7612?focusedCommentId=16303051=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-16303051].
>  AllocationTagsManager maintains tag info that helps to make placement 
> decisions in the placement phase; however, tags change along with the 
> container's lifecycle, so it is possible that the placement violates the 
> constraints by the scheduling phase. Propose to add an extra check in the 
> scheduler to make sure constraints are still satisfied during the actual 
> allocation.
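
Conceptually, the extra check would sit in the scheduler's allocation path 
roughly as in the sketch below (illustrative pseudocode, not the committed 
change):
{code:java}
// Re-validate just before committing the allocation, since allocation tags
// may have changed since the placement decision was made.
boolean stillSatisfied = PlacementConstraintsUtil.canSatisfyConstraints(
    appId, schedulingRequest, schedulerNode,
    placementConstraintManager, allocationTagsManager);
if (!stillSatisfied) {
  // Tags changed between placement and scheduling: skip this node instead
  // of making a violating allocation.
  return ContainerAllocation.APP_SKIPPED;
}
// ...proceed with the actual allocation on schedulerNode...
{code}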



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-7709) Remove SELF from TargetExpression type

2018-01-31 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7709?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16347111#comment-16347111
 ] 

Hudson commented on YARN-7709:
--

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #13589 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/13589/])
YARN-7709. Remove SELF from TargetExpression type. (Konstantinos (arun suresh: 
rev 8779a35742085fadddccc21342b55d4f17fae5c2)
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/java/org/apache/hadoop/yarn/api/resource/PlacementConstraintTransformations.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/api/resource/PlacementConstraint.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/test/java/org/apache/hadoop/yarn/api/resource/TestPlacementConstraintTransformations.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/constraint/PlacementConstraintsUtil.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/test/java/org/apache/hadoop/yarn/api/resource/TestPlacementConstraints.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/api/resource/PlacementConstraints.java


> Remove SELF from TargetExpression type
> --
>
> Key: YARN-7709
> URL: https://issues.apache.org/jira/browse/YARN-7709
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Wangda Tan
>Assignee: Konstantinos Karanasos
>Priority: Blocker
> Fix For: YARN-6592
>
> Attachments: YARN-7709-YARN-6592.001.patch
>
>
> As mentioned by [~asuresh], SELF means target allocation tag same as 
> allocation tag of the scheduling request itself. So this is not a new type 
> for sure, it is still ALLOCATION_TAG type.
> If we really want this functionality, we can build it into 
> PlacementConstraints, but I'm doubtful about this since copying allocation 
> tags from the source is just trivial work.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-6595) [API] Add Placement Constraints at the application level

2018-01-31 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6595?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16347098#comment-16347098
 ] 

Hudson commented on YARN-6595:
--

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #13589 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/13589/])
YARN-6595. [API] Add Placement Constraints at the application level. (arun 
suresh: rev db928556c81e5950b3fe374fa5b99ab26791ef3a)
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/api/protocolrecords/RegisterApplicationMasterRequest.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/proto/yarn_service_protos.proto
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/test/java/org/apache/hadoop/yarn/api/BasePBImplRecordsTest.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/api/resource/PlacementConstraint.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/java/org/apache/hadoop/yarn/api/protocolrecords/impl/pb/RegisterApplicationMasterRequestPBImpl.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/proto/yarn_protos.proto


> [API] Add Placement Constraints at the application level
> 
>
> Key: YARN-6595
> URL: https://issues.apache.org/jira/browse/YARN-6595
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Konstantinos Karanasos
>Assignee: Arun Suresh
>Priority: Major
> Fix For: YARN-6592
>
> Attachments: YARN-6595-YARN-6592.001.patch, 
> YARN-6595-YARN-6592.002.patch, YARN-6595-YARN-6592.003.patch, 
> YARN-6595-YARN-6592.004.patch, YARN-6595-YARN-6592.005.patch
>
>
> This JIRA allows placement constraints to be specified at the application 
> level.
> This will be used for placement constraints between different components of 
> the application.
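
For illustration, usage could look roughly like the sketch below; the tag 
"hbase-rs" is hypothetical and the setter name is assumed from the patches:
{code:java}
import java.util.Collections;
import java.util.Map;
import java.util.Set;
import org.apache.hadoop.yarn.api.protocolrecords.RegisterApplicationMasterRequest;
import org.apache.hadoop.yarn.api.resource.PlacementConstraint;
import org.apache.hadoop.yarn.api.resource.PlacementConstraints;

// Sketch: register an app-level anti-affinity constraint for containers
// tagged "hbase-rs" when the AM registers.
class RegisterWithConstraints {
  RegisterApplicationMasterRequest build() {
    PlacementConstraint antiAffinity = PlacementConstraints.build(
        PlacementConstraints.targetNotIn(PlacementConstraints.NODE,
            PlacementConstraints.PlacementTargets.allocationTag("hbase-rs")));
    Map<Set<String>, PlacementConstraint> constraints =
        Collections.singletonMap(
            Collections.singleton("hbase-rs"), antiAffinity);
    RegisterApplicationMasterRequest req =
        RegisterApplicationMasterRequest.newInstance("am-host", -1, "");
    req.setPlacementConstraints(constraints);
    return req;
  }
}
{code}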



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-7696) Add container tags to ContainerTokenIdentifier, api.Container and NMContainerStatus to handle all recovery cases

2018-01-31 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7696?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16347109#comment-16347109
 ] 

Hudson commented on YARN-7696:
--

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #13589 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/13589/])
YARN-7696. Add container tags to ContainerTokenIdentifier, api.Container (arun 
suresh: rev a5c1fc881e21ebf43da7ead5f3852808fce25492)
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/java/org/apache/hadoop/yarn/api/records/impl/pb/ContainerPBImpl.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/rmcontainer/RMContainerImpl.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/proto/yarn_protos.proto
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-common/src/main/java/org/apache/hadoop/yarn/server/api/protocolrecords/impl/pb/NMContainerStatusPBImpl.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-common/src/main/proto/yarn_server_common_service_protos.proto
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/security/RMContainerTokenSecretManager.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/SchedulerApplicationAttempt.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/api/records/Container.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/ContainerManagerImpl.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-common/src/main/java/org/apache/hadoop/yarn/server/api/protocolrecords/NMContainerStatus.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/java/org/apache/hadoop/yarn/security/ContainerTokenIdentifier.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/proto/yarn_security_token.proto
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/container/ContainerImpl.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/capacity/TestContainerAllocation.java


> Add container tags to ContainerTokenIdentifier, api.Container and 
> NMContainerStatus to handle all recovery cases
> 
>
> Key: YARN-7696
> URL: https://issues.apache.org/jira/browse/YARN-7696
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Arun Suresh
>Assignee: Arun Suresh
>Priority: Major
> Fix For: YARN-6592
>
> Attachments: YARN-7696-YARN-6592.001.patch, 
> YARN-7696-YARN-6592.002.patch, YARN-7696-YARN-6592.003.patch, 
> YARN-7696-YARN-6592.004.patch
>
>
> The NM needs to persist the Container tags so that on RM recovery they are 
> sent back to the RM via the NMContainerStatus. The RM would then recover the 
> AllocationTagsManager using this information.
> The api.Container also requires the allocationTags since after AM recovery, 
> we need to provide the AM with previously allocated containers.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-7795) Fix jenkins issues of YARN-6592 branch

2018-01-31 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7795?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16347120#comment-16347120
 ] 

Hudson commented on YARN-7795:
--

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #13589 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/13589/])
YARN-7795. Fix jenkins issues of YARN-6592 branch. (Sunil G via asuresh) (arun 
suresh: rev c23980c4f2cf4c751a99fd310e60149cb32ea7c7)
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/SchedulerNode.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/capacity/TestCapacitySchedulerSchedulingRequestUpdate.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/java/org/apache/hadoop/yarn/api/records/impl/pb/SchedulingRequestPBImpl.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/placement/SingleConstraintAppPlacementAllocator.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/capacity/CapacityScheduler.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/AbstractYarnScheduler.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/capacity/TestSchedulingRequestContainerAllocationAsync.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/capacity/TestCapacityScheduler.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/api/records/SchedulingRequest.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/java/org/apache/hadoop/yarn/api/records/impl/pb/ResourceSizingPBImpl.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/constraint/PlacementConstraintsUtil.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/placement/AppPlacementAllocator.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/constraint/AllocationTagsManager.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/api/resource/PlacementConstraints.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/AppSchedulingInfo.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/api/protocolrecords/AllocateRequest.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/api/resource/PlacementConstraint.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/capacity/TestSchedulingRequestContainerAllocation.java


> Fix jenkins issues of YARN-6592 branch
> --
>
> Key: YARN-7795
> URL: https://issues.apache.org/jira/browse/YARN-7795
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Sunil G
>Assignee: Sunil G
>Priority: Blocker
> Fix For: YARN-6592
>
> Attachments: YARN-7795-YARN-6592.001.patch
>
>
> Refer link . Also fix the javadoc errors.
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-6599) Support anti-affinity constraint via AppPlacementAllocator

2018-01-31 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6599?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16347112#comment-16347112
 ] 

Hudson commented on YARN-6599:
--

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #13589 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/13589/])
YARN-6599. Support anti-affinity constraint via AppPlacementAllocator. (arun 
suresh: rev 38af23796971193fa529c3d08ffde8fcd6e607b6)
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-client/src/test/java/org/apache/hadoop/yarn/client/api/impl/TestAMRMClientOnRMRestart.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/MockAM.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/capacity/TestCapacitySchedulerAutoQueueCreation.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/api/resource/PlacementConstraints.java
* (edit) 
hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-app/src/test/java/org/apache/hadoop/mapreduce/v2/app/rm/TestRMContainerAllocator.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/AbstractYarnScheduler.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/ContainerUpdateContext.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/common/PendingAsk.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/placement/AppPlacementAllocator.java
* (add) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/exceptions/SchedulerInvalidResoureRequestException.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/SchedulerApplicationAttempt.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/TestAppSchedulingInfo.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-common/src/main/java/org/apache/hadoop/yarn/server/scheduler/SchedulerRequestKey.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/common/fica/FiCaSchedulerApp.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/AppSchedulingInfo.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/java/org/apache/hadoop/yarn/api/protocolrecords/impl/pb/AllocateRequestPBImpl.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/rmapp/attempt/RMAppAttemptImpl.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/constraint/algorithm/DefaultPlacementAlgorithm.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/conf/YarnConfiguration.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/fair/FairScheduler.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/fair/TestContinuousScheduling.java
* (add) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/placement/TestSingleConstraintAppPlacementAllocator.java
* (edit) 
hadoop-tools/hadoop-sls/src/main/java/org/apache/hadoop/yarn/sls/scheduler/SLSCapacityScheduler.java
* (add) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/capacity/TestCapacitySchedulerSchedulingRequestUpdate.java
* (edit) 

[jira] [Commented] (YARN-7653) Rack cardinality support for AllocationTagsManager

2018-01-31 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7653?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16347103#comment-16347103
 ] 

Hudson commented on YARN-7653:
--

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #13589 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/13589/])
YARN-7653. Node group support for AllocationTagsManager. (Panagiotis (arun 
suresh: rev 37f1a7b64fcc93191367330cd59d4d71d7b29ac7)
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/rmcontainer/TestRMContainerImpl.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/ResourceManager.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/constraint/AllocationTagsManager.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/constraint/TestAllocationTagsManager.java


> Rack cardinality support for AllocationTagsManager
> --
>
> Key: YARN-7653
> URL: https://issues.apache.org/jira/browse/YARN-7653
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Panagiotis Garefalakis
>Assignee: Panagiotis Garefalakis
>Priority: Major
> Fix For: YARN-6592
>
> Attachments: YARN-7653-YARN-6592.001.patch, 
> YARN-7653-YARN-6592.002.patch, YARN-7653-YARN-6592.003.patch
>
>
> AllocationTagsManager currently supports node and cluster-wide tag 
> cardinality retrieval.
> If we want to support arbitrary node groups/scopes for our placement 
> constraints, the TagsManager should be extended to provide such functionality.
> As a first step, we need to support RACK-scope cardinality retrieval (as 
> defined in our API), e.g. how many "spark" containers are currently running 
> on "RACK-1".
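
A minimal sketch of what RACK-scope retrieval adds on top of the existing 
per-node counters (names are illustrative, not the actual AllocationTagsManager 
API):

{code:java}
import java.util.HashMap;
import java.util.Map;

/** Toy rack-scope tag counters. */
public class RackTagCardinality {
  // rack -> (tag -> number of running containers carrying that tag)
  private final Map<String, Map<String, Long>> tagsPerRack = new HashMap<>();

  public void onAllocate(String rack, String tag) {
    tagsPerRack.computeIfAbsent(rack, r -> new HashMap<>())
        .merge(tag, 1L, Long::sum);
  }

  public long getRackCardinality(String rack, String tag) {
    Map<String, Long> counts = tagsPerRack.get(rack);
    return counts == null ? 0L : counts.getOrDefault(tag, 0L);
  }

  public static void main(String[] args) {
    RackTagCardinality c = new RackTagCardinality();
    c.onAllocate("RACK-1", "spark");
    c.onAllocate("RACK-1", "spark");
    System.out.println(c.getRackCardinality("RACK-1", "spark")); // 2
  }
}
{code}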



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-7783) Add validation step to ensure constraints are not violated due to order in which a request is processed

2018-01-31 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7783?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16347118#comment-16347118
 ] 

Hudson commented on YARN-7783:
--

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #13589 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/13589/])
YARN-7783. Add validation step to ensure constraints are not violated (arun 
suresh: rev a4c539fcdba817e313b2375abf2c4c9a1d13a4fd)
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/constraint/algorithm/DefaultPlacementAlgorithm.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/constraint/TestPlacementProcessor.java


> Add validation step to ensure constraints are not violated due to order in 
> which a request is processed
> ---
>
> Key: YARN-7783
> URL: https://issues.apache.org/jira/browse/YARN-7783
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Arun Suresh
>Assignee: Arun Suresh
>Priority: Blocker
> Fix For: YARN-6592
>
> Attachments: YARN-7783-YARN-6592.001.patch, 
> YARN-7783-YARN-6592.002.patch, YARN-7783-YARN-6592.003.patch, 
> YARN-7783-YARN-6592.004.patch
>
>
> When the algorithm has placed a container on a node, allocation tags are 
> added to the node if the constraint is satisfied. But depending on the order 
> in which the algorithm sees the requests, it is possible that a constraint 
> that happened to be valid during placement of an earlier-seen request is no 
> longer valid after all subsequent requests have been placed.
> For example:
> Assume nodes n1, n2, n3, n4 and n5
> Consider the 2 constraints:
> # *foo* -> anti-affinity with *foo*
> # *bar* -> anti-affinity with *foo*
> And 2 requests
> # req1: NumAllocations = 4, allocTags = [foo]
> # req2: NumAllocations = 1, allocTags = [bar]
> If *req1* is seen first, the algorithm can place the 4 containers on n1, n2, 
> n3 and n4. When it gets to *req2*, it will see that 4 nodes have the *foo* 
> tag and will place it on n5. But if *req2* is seen first, the *bar* tag can 
> be placed on any node, since no node will at that point have *foo*. Then, 
> when the algorithm gets to *req1*, since *foo* has no anti-affinity with 
> *bar*, it can end up placing *foo* on the node with *bar*, violating the 
> second constraint.
> To prevent the above, we need a validation step: after the placements for a 
> batch of requests are made, for each request we remove its tags from the 
> node and check whether the constraints would still be satisfied if the tags 
> were added back on the node.
> When applied to the example above, after the algorithm has run through *req2* 
> and then *req1*, we remove the *bar* tag from its node and try to add it 
> back. This time, constraint satisfaction fails, since there is now a *foo* 
> tag on the node and *bar* cannot be added. The algorithm will then retry 
> placing *req2* on another node.
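
The validation pass can be made concrete with a toy model of the foo/bar 
example (hypothetical names; the actual change lives in 
DefaultPlacementAlgorithm). Modern-Java records are used for brevity:

{code:java}
import java.util.*;

public class ValidationSketch {

  record Placement(String node, String tag, Set<String> antiAffinityWith) {}

  // A tag may be (re-)added only if none of its anti-affinity targets are present.
  static boolean canAdd(Placement p, Map<String, Set<String>> tagsOnNode) {
    Set<String> present = tagsOnNode.getOrDefault(p.node(), Set.of());
    return Collections.disjoint(present, p.antiAffinityWith());
  }

  public static void main(String[] args) {
    Map<String, Set<String>> tagsOnNode = new HashMap<>();
    // req2 seen first: "bar" lands on n1 (no "foo" anywhere yet). Then one of
    // req1's "foo" containers also lands on n1, since "foo" is only
    // anti-affine with "foo".
    List<Placement> placed = List.of(
        new Placement("n1", "bar", Set.of("foo")),
        new Placement("n1", "foo", Set.of("foo")));
    for (Placement p : placed) {
      tagsOnNode.computeIfAbsent(p.node(), n -> new HashSet<>()).add(p.tag());
    }

    // Validation: remove each placement's tag and check it could be re-added.
    for (Placement p : placed) {
      tagsOnNode.get(p.node()).remove(p.tag());
      boolean stillValid = canAdd(p, tagsOnNode);
      tagsOnNode.get(p.node()).add(p.tag());
      if (!stillValid) {
        System.out.println("re-place " + p.tag() + " on another node"); // "bar"
      }
    }
  }
}
{code}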



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-6592) Rich placement constraints in YARN

2018-01-31 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6592?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16347121#comment-16347121
 ] 

Hudson commented on YARN-6592:
--

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #13589 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/13589/])
YARN-7795. Fix jenkins issues of YARN-6592 branch. (Sunil G via asuresh) (arun 
suresh: rev c23980c4f2cf4c751a99fd310e60149cb32ea7c7)
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/java/org/apache/hadoop/yarn/api/records/impl/pb/ResourceSizingPBImpl.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/capacity/TestSchedulingRequestContainerAllocation.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/capacity/CapacityScheduler.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/capacity/TestCapacitySchedulerSchedulingRequestUpdate.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/AppSchedulingInfo.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/placement/AppPlacementAllocator.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/constraint/AllocationTagsManager.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/constraint/PlacementConstraintsUtil.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/capacity/TestCapacityScheduler.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/api/records/SchedulingRequest.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/SchedulerNode.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/api/protocolrecords/AllocateRequest.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/java/org/apache/hadoop/yarn/api/records/impl/pb/SchedulingRequestPBImpl.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/placement/SingleConstraintAppPlacementAllocator.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/api/resource/PlacementConstraints.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/api/resource/PlacementConstraint.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/AbstractYarnScheduler.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/capacity/TestSchedulingRequestContainerAllocationAsync.java


> Rich placement constraints in YARN
> --
>
> Key: YARN-6592
> URL: https://issues.apache.org/jira/browse/YARN-6592
> Project: Hadoop YARN
>  Issue Type: New Feature
>Reporter: Konstantinos Karanasos
>Assignee: Arun Suresh
>Priority: Major
> Fix For: 3.1.0
>
> Attachments: YARN-6592-Rich-Placement-Constraints-Design-V1.pdf
>
>
> This JIRA consolidates the efforts of YARN-5468 and YARN-4902.
> It adds support for rich placement constraints to YARN, such as affinity and 
> anti-affinity between allocations within the same or across applications.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-7670) Modifications to the ResourceScheduler to support SchedulingRequests

2018-01-31 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7670?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16347101#comment-16347101
 ] 

Hudson commented on YARN-7670:
--

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #13589 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/13589/])
YARN-7670. Modifications to the ResourceScheduler API to support (arun suresh: 
rev 88d8d3f40b2923fab23a933bce1cd2e9c320ae84)
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/common/ResourceAllocationCommitter.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/common/fica/FiCaSchedulerApp.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/AbstractYarnScheduler.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/capacity/TestCapacitySchedulerAsyncScheduling.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/ResourceScheduler.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/capacity/CapacityScheduler.java


> Modifications to the ResourceScheduler to support SchedulingRequests
> 
>
> Key: YARN-7670
> URL: https://issues.apache.org/jira/browse/YARN-7670
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Arun Suresh
>Assignee: Arun Suresh
>Priority: Major
> Fix For: YARN-6592
>
> Attachments: YARN-7670-YARN-6592.001.patch, 
> YARN-7670-YARN-6592.002.patch, YARN-7670-YARN-6592.003.patch, 
> YARN-7670-YARN-6592.addendum.patch
>
>
> As per the discussion in YARN-7612, this JIRA tracks the changes to the 
> ResourceScheduler interface and its implementation in CapacityScheduler to 
> support SchedulingRequests.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-7522) Introduce AllocationTagsManager to associate allocation tags to nodes

2018-01-31 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7522?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16347100#comment-16347100
 ] 

Hudson commented on YARN-7522:
--

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #13589 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/13589/])
YARN-7522. Introduce AllocationTagsManager to associate allocation tags (arun 
suresh: rev 801c0988b5ad1eff1e896a2635c2937721c96b04)
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/rmcontainer/RMContainerImpl.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/RMContext.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/rmcontainer/RMContainer.java
* (add) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/constraint/TestAllocationTagsManager.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/capacity/TestUtils.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/ResourceManager.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/RMContextImpl.java
* (add) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/constraint/InvalidAllocationTagsQueryException.java
* (add) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/constraint/AllocationTagsNamespaces.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/RMActiveServiceContext.java
* (add) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/constraint/AllocationTagsManager.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/rmcontainer/TestRMContainerImpl.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/fifo/TestFifoScheduler.java


> Introduce AllocationTagsManager to associate allocation tags to nodes
> -
>
> Key: YARN-7522
> URL: https://issues.apache.org/jira/browse/YARN-7522
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Wangda Tan
>Assignee: Wangda Tan
>Priority: Major
> Fix For: YARN-6592
>
> Attachments: YARN-7522.YARN-6592.002.patch, 
> YARN-7522.YARN-6592.003.patch, YARN-7522.YARN-6592.004.patch, 
> YARN-7522.YARN-6592.005.patch, YARN-7522.YARN-6592.wip-001.patch
>
>
> This is different from YARN-6596, which is targeted at adding a constraint 
> manager to store intra-/inter-application placement constraints. This JIRA is 
> targeted at storing maps between container tags/applications and nodes. This 
> will be required by the affinity/anti-affinity implementation and by 
> cardinality support.
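
Conceptually the manager reduces to reference-counted tags per node; a toy 
model (illustrative, not the real API):

{code:java}
import java.util.HashMap;
import java.util.Map;
import java.util.Set;

/** Toy per-node allocation-tag counters. */
public class TagStore {
  // node -> (tag -> count of running containers carrying that tag)
  private final Map<String, Map<String, Long>> tagsPerNode = new HashMap<>();

  public void addContainer(String node, Set<String> tags) {
    Map<String, Long> counts =
        tagsPerNode.computeIfAbsent(node, n -> new HashMap<>());
    tags.forEach(t -> counts.merge(t, 1L, Long::sum));
  }

  public void removeContainer(String node, Set<String> tags) {
    Map<String, Long> counts = tagsPerNode.get(node);
    if (counts == null) {
      return;
    }
    // drop a counter entirely once it reaches zero
    tags.forEach(t -> counts.computeIfPresent(t, (k, v) -> v > 1 ? v - 1 : null));
  }

  public long cardinality(String node, String tag) {
    Map<String, Long> counts = tagsPerNode.get(node);
    return counts == null ? 0L : counts.getOrDefault(tag, 0L);
  }
}
{code}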



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-7612) Add Processor Framework for Rich Placement Constraints

2018-01-31 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7612?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16347105#comment-16347105
 ] 

Hudson commented on YARN-7612:
--

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #13589 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/13589/])
YARN-7612. Add Processor Framework for Rich Placement Constraints. (arun 
suresh: rev f9af15d659f59fd0cf564fe1ecc8e06c6429ba68)
* (add) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/constraint/processor/BatchedRequests.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/MockAM.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/resources/yarn-default.xml
* (add) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/constraint/processor/NodeCandidateSelector.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/rmcontainer/RMContainerImpl.java
* (add) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/constraint/TestPlacementProcessor.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/conf/YarnConfiguration.java
* (add) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/constraint/processor/PlacementProcessor.java
* (add) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/constraint/processor/PlacementDispatcher.java
* (add) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/constraint/processor/package-info.java
* (add) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/constraint/processor/SamplePlacementAlgorithm.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/ApplicationMasterService.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/capacity/CapacityScheduler.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/MockRM.java


> Add Processor Framework for Rich Placement Constraints
> --
>
> Key: YARN-7612
> URL: https://issues.apache.org/jira/browse/YARN-7612
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Arun Suresh
>Assignee: Arun Suresh
>Priority: Major
> Fix For: YARN-6592
>
> Attachments: YARN-7612-YARN-6592.001.patch, 
> YARN-7612-YARN-6592.002.patch, YARN-7612-YARN-6592.003.patch, 
> YARN-7612-YARN-6592.004.patch, YARN-7612-YARN-6592.005.patch, 
> YARN-7612-YARN-6592.006.patch, YARN-7612-YARN-6592.007.patch, 
> YARN-7612-YARN-6592.008.patch, YARN-7612-YARN-6592.009.patch, 
> YARN-7612-YARN-6592.010.patch, YARN-7612-YARN-6592.011.patch, 
> YARN-7612-YARN-6592.012.patch, YARN-7612-v2.wip.patch, YARN-7612.wip.patch
>
>
> This introduces a Placement Processor and a planning algorithm framework to 
> handle placement constraints and scheduling requests from an app and place 
> them on nodes.
> The actual planning algorithm(s) will be handled in YARN-7613.
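
In rough terms, the processor intercepts SchedulingRequests on the allocate 
path, batches them, and hands each batch to a pluggable algorithm that reports 
per-request outcomes. A sketch of that split, with illustrative interface 
names (not the exact ones in the patch):

{code:java}
import java.util.Collection;

/** Pluggable planning algorithm invoked by the placement processor. */
interface PlanningAlgorithm<R, N> {
  /** Compute a node for each request in the batch, or reject it. */
  void place(Collection<R> batchedRequests, OutputCollector<R, N> collector);
}

/** Callback the algorithm uses to report per-request outcomes. */
interface OutputCollector<R, N> {
  void placed(R request, N node); // dispatched to the scheduler for allocation
  void rejected(R request);       // routed to a retry/reject path
}
{code}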



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-6619) AMRMClient Changes to use the PlacementConstraint and SchedulingRequest objects

2018-01-31 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6619?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16347110#comment-16347110
 ] 

Hudson commented on YARN-6619:
--

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #13589 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/13589/])
YARN-6619. AMRMClient Changes to use the PlacementConstraint and (arun suresh: 
rev 29d9e4d5814900d5c59d77fe05d32186d4ad9385)
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/rmcontainer/RMContainerImpl.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-client/src/main/java/org/apache/hadoop/yarn/client/api/async/AMRMClientAsync.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-client/src/main/java/org/apache/hadoop/yarn/client/api/async/impl/AMRMClientAsyncImpl.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-client/src/test/java/org/apache/hadoop/yarn/client/api/impl/TestAMRMClient.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/constraint/PlacementConstraintsUtil.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-client/src/main/java/org/apache/hadoop/yarn/client/api/AMRMClient.java
* (add) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-client/src/test/java/org/apache/hadoop/yarn/client/api/impl/TestAMRMClientPlacementConstraints.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-client/src/main/java/org/apache/hadoop/yarn/client/api/impl/AMRMClientImpl.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/AbstractYarnScheduler.java
* (add) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-client/src/test/java/org/apache/hadoop/yarn/client/api/impl/BaseAMRMClientTest.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/SchedulerApplicationAttempt.java


> AMRMClient Changes to use the PlacementConstraint and SchedulingRequest 
> objects
> 
>
> Key: YARN-6619
> URL: https://issues.apache.org/jira/browse/YARN-6619
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Arun Suresh
>Assignee: Arun Suresh
>Priority: Major
> Fix For: YARN-6592
>
> Attachments: YARN-6619-YARN-6592.001.patch, 
> YARN-6619-YARN-6592.002.patch, YARN-6619-YARN-6592.003.patch, 
> YARN-6619-YARN-6592.004.patch
>
>
> Opening this JIRA to track changes needed in the AMRMClient to incorporate 
> the PlacementConstraint and SchedulingRequest objects.
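
For context, a rough sketch of the client-side flow this enables, using the 
AMRMClient methods added around this change (assumed signatures per the 
Hadoop 3.1 API; setup and error handling omitted):

{code:java}
import java.util.Collections;
import java.util.HashMap;
import java.util.Map;
import java.util.Set;
import org.apache.hadoop.yarn.api.records.SchedulingRequest;
import org.apache.hadoop.yarn.api.resource.PlacementConstraint;
import org.apache.hadoop.yarn.api.resource.PlacementConstraints;
import org.apache.hadoop.yarn.client.api.AMRMClient;

class AmFlowSketch {
  static void registerAndRequest(
      AMRMClient<AMRMClient.ContainerRequest> amrmClient,
      SchedulingRequest schedulingRequest) throws Exception {
    // Register, handing the RM this app's constraints keyed by allocation-tag
    // set: here, anti-affinity between "hbase" containers at node scope.
    Map<Set<String>, PlacementConstraint> constraints = new HashMap<>();
    constraints.put(Collections.singleton("hbase"),
        PlacementConstraints.targetNotIn(PlacementConstraints.NODE,
            PlacementConstraints.PlacementTargets.allocationTag("hbase"))
            .build());
    amrmClient.registerApplicationMaster("appHost", 0, "", constraints);

    // SchedulingRequests then flow to the RM via addSchedulingRequests
    amrmClient.addSchedulingRequests(
        Collections.singletonList(schedulingRequest));
  }
}
{code}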



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-6594) [API] Introduce SchedulingRequest object

2018-01-31 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6594?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16347097#comment-16347097
 ] 

Hudson commented on YARN-6594:
--

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #13589 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/13589/])
YARN-6594. [API] Introduce SchedulingRequest object. (Konstantinos (arun 
suresh: rev b57e8bc3002a95d2f2f328554d792151cdc1120d)
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/proto/yarn_protos.proto
* (add) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/api/records/ResourceSizing.java
* (add) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/java/org/apache/hadoop/yarn/api/records/impl/pb/SchedulingRequestPBImpl.java
* (add) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/java/org/apache/hadoop/yarn/api/records/impl/pb/ResourceSizingPBImpl.java
* (add) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/api/records/SchedulingRequest.java


> [API] Introduce SchedulingRequest object
> 
>
> Key: YARN-6594
> URL: https://issues.apache.org/jira/browse/YARN-6594
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Konstantinos Karanasos
>Assignee: Konstantinos Karanasos
>Priority: Major
> Fix For: YARN-6592
>
> Attachments: YARN-6594-YARN-6592.002.patch, YARN-6594.001.patch
>
>
> This JIRA introduces a new SchedulingRequest object.
> It will be part of the {{AllocateRequest}} and will be used to define sizing 
> (e.g., number of allocations, size of allocations) and placement constraints 
> for allocations.
> Applications can use either this new object (when rich placement constraints 
> are required) or the existing {{ResourceRequest}} object.
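
For illustration, constructing one of these objects looks roughly like the 
following under the 3.1-era API (builder method names approximate): five 1 GB 
containers tagged "hbase" with node-scope anti-affinity on that tag.

{code:java}
import java.util.Collections;
import org.apache.hadoop.yarn.api.records.Resource;
import org.apache.hadoop.yarn.api.records.ResourceSizing;
import org.apache.hadoop.yarn.api.records.SchedulingRequest;
import org.apache.hadoop.yarn.api.resource.PlacementConstraints;

class SchedulingRequestSketch {
  static SchedulingRequest fiveAntiAffineContainers() {
    return SchedulingRequest.newBuilder()
        .allocationRequestId(1)
        .allocationTags(Collections.singleton("hbase"))
        // sizing: 5 allocations of 1 GB / 1 vcore each
        .resourceSizing(
            ResourceSizing.newInstance(5, Resource.newInstance(1024, 1)))
        // constraint: no two "hbase" containers on the same node
        .placementConstraintExpression(
            PlacementConstraints.targetNotIn(PlacementConstraints.NODE,
                PlacementConstraints.PlacementTargets.allocationTag("hbase"))
                .build())
        .build();
  }
}
{code}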



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-6597) Add RMContainer recovery test to verify tag population in the AllocationTagsManager

2018-01-31 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6597?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16347123#comment-16347123
 ] 

Hudson commented on YARN-6597:
--

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #13589 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/13589/])
YARN-6597. Add RMContainer recovery test to verify tag population in the (arun 
suresh: rev add993e26a3c96f77dfd42086f186a139966019e)
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/rmcontainer/TestRMContainerImpl.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/rmcontainer/RMContainerImpl.java


> Add RMContainer recovery test to verify tag population in the 
> AllocationTagsManager
> ---
>
> Key: YARN-6597
> URL: https://issues.apache.org/jira/browse/YARN-6597
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Konstantinos Karanasos
>Assignee: Panagiotis Garefalakis
>Priority: Major
> Fix For: YARN-6592
>
> Attachments: YARN-6597-YARN-6592.001.patch
>
>
> Each allocation can have a set of allocation tags associated with it.
> For example, an allocation can be marked as hbase, hbase-master, spark, etc.
> These allocation tags are active in the cluster only while that container is 
> active (from the moment it gets allocated until the moment it finishes its 
> execution).
> This JIRA is responsible for storing and updating the active allocation tags 
> in the cluster in the {{PlacementConstraintManager}}.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-7613) Implement Basic algorithm for constraint based placement

2018-01-31 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7613?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16347106#comment-16347106
 ] 

Hudson commented on YARN-7613:
--

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #13589 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/13589/])
YARN-7613. Implement Basic algorithm for constraint based placement. (arun 
suresh: rev a52d11fb8c103f14e42692600a058ba3b56e2ecf)
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/resources/yarn-default.xml
* (add) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/constraint/TestBatchedRequestsIterators.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/conf/YarnConfiguration.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/constraint/AllocationTagsManager.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/constraint/TestPlacementProcessor.java
* (add) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/constraint/algorithm/DefaultPlacementAlgorithm.java
* (add) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/constraint/algorithm/package-info.java
* (add) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/constraint/algorithm/iterators/SerialIterator.java
* (delete) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/constraint/processor/SamplePlacementAlgorithm.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/constraint/processor/PlacementProcessor.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/constraint/processor/BatchedRequests.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/constraint/TestAllocationTagsManager.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/rmcontainer/RMContainerImpl.java
* (add) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/constraint/algorithm/iterators/package-info.java
* (add) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/constraint/algorithm/iterators/PopularTagsIterator.java


> Implement Basic algorithm for constraint based placement
> 
>
> Key: YARN-7613
> URL: https://issues.apache.org/jira/browse/YARN-7613
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Arun Suresh
>Assignee: Panagiotis Garefalakis
>Priority: Major
> Attachments: YARN-7613-YARN-6592.001.patch, 
> YARN-7613-YARN-6592.002.patch, YARN-7613-YARN-6592.003.patch, 
> YARN-7613-YARN-6592.004.patch, YARN-7613-YARN-6592.005.patch, 
> YARN-7613-YARN-6592.006.patch, YARN-7613.wip.patch
>
>




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-6596) Introduce Placement Constraint Manager module

2018-01-31 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6596?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16347104#comment-16347104
 ] 

Hudson commented on YARN-6596:
--

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #13589 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/13589/])
YARN-6596. Introduce Placement Constraint Manager module. (Konstantinos (arun 
suresh: rev 1efb2b6f250022f41fe5911c1bb3028ec15c5447)
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/RMActiveServiceContext.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/RMContextImpl.java
* (add) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/constraint/PlacementConstraintManager.java
* (add) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/constraint/PlacementConstraintManagerService.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/RMContext.java
* (add) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/constraint/TestPlacementConstraintManagerService.java
* (add) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/constraint/MemoryPlacementConstraintManager.java
* (add) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/constraint/package-info.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/ResourceManager.java


> Introduce Placement Constraint Manager module
> -
>
> Key: YARN-6596
> URL: https://issues.apache.org/jira/browse/YARN-6596
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Konstantinos Karanasos
>Assignee: Konstantinos Karanasos
>Priority: Major
> Fix For: YARN-6592
>
> Attachments: YARN-6596-YARN-6592.001.patch, 
> YARN-6596-YARN-6592.002.patch, YARN-6596-YARN-6592.003.patch
>
>
> This RM module will be responsible for storing placement constraints, 
> allocation tags, and node attributes.
> It will be used when determining the placement of SchedulingRequests with 
> constraints.
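
At its core this is a registry keyed by application and allocation-tag set; a 
toy model (illustrative, not the actual MemoryPlacementConstraintManager):

{code:java}
import java.util.HashMap;
import java.util.Map;
import java.util.Optional;
import java.util.Set;

/** Toy per-application placement-constraint registry. */
public class ConstraintRegistry<C> {
  // appId -> (allocation-tag set -> constraint)
  private final Map<String, Map<Set<String>, C>> appConstraints = new HashMap<>();

  public void register(String appId, Set<String> allocationTags, C constraint) {
    appConstraints.computeIfAbsent(appId, a -> new HashMap<>())
        .put(allocationTags, constraint);
  }

  /** Looked up by the scheduler when placing a SchedulingRequest. */
  public Optional<C> lookup(String appId, Set<String> allocationTags) {
    Map<Set<String>, C> forApp = appConstraints.get(appId);
    return forApp == null
        ? Optional.empty()
        : Optional.ofNullable(forApp.get(allocationTags));
  }
}
{code}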



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-7774) Miscellaneous fixes to the PlacementProcessor

2018-01-31 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7774?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16347114#comment-16347114
 ] 

Hudson commented on YARN-7774:
--

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #13589 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/13589/])
YARN-7774. Miscellaneous fixes to the PlacementProcessor. (asuresh) (arun 
suresh: rev 28fe7f331837b36e78fa34ed990993677dddeaee)
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/placement/SingleConstraintAppPlacementAllocator.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/placement/TestSingleConstraintAppPlacementAllocator.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/capacity/CapacityScheduler.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/constraint/processor/BatchedRequests.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/constraint/algorithm/DefaultPlacementAlgorithm.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/constraint/PlacementConstraintsUtil.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/constraint/TestPlacementProcessor.java
* (add) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/constraint/algorithm/CircularIterator.java
* (add) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/constraint/algorithm/TestCircularIterator.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/MockAM.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/SchedulerNode.java


> Miscellaneous fixes to the PlacementProcessor
> -
>
> Key: YARN-7774
> URL: https://issues.apache.org/jira/browse/YARN-7774
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Arun Suresh
>Assignee: Arun Suresh
>Priority: Blocker
> Fix For: YARN-6592
>
> Attachments: YARN-7774-YARN-6592.001.patch, 
> YARN-7774-YARN-6592.002.patch, YARN-7774-YARN-6592.003.patch, 
> YARN-7774-YARN-6592.004.patch, YARN-7774-YARN-6592.005.patch
>
>
> JIRA to track the following minor changes:
> * The scheduler must normalize requests that are made using the 
> {{attemptAllocationOnNode}} method.
> * Currently, the placement algorithm resets the node iterator for each 
> request. The placement algorithm should either shuffle the node iterator or 
> use a circular iterator - to ensure that a) more nodes are looked at and b) 
> it biases against placing too many containers on the same node (see the 
> sketch below).
> * Add a placement retry loop for rejected requests, since there are cases, 
> especially when constraints will only be satisfied after a subsequent request 
> has been placed.
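
The circular-iterator idea from the second bullet, as a small sketch 
(illustrative; the patch adds its own CircularIterator):

{code:java}
import java.util.ArrayList;
import java.util.List;

/** Node iterator whose cursor survives across requests, so consecutive
 *  requests start scanning from different nodes instead of the list head. */
public class CircularNodeIterator<N> {
  private final List<N> nodes;
  private int cursor = 0;

  public CircularNodeIterator(List<N> nodes) {
    this.nodes = new ArrayList<>(nodes);
  }

  /** Next candidate node, wrapping around the list. */
  public N next() {
    N node = nodes.get(cursor);
    cursor = (cursor + 1) % nodes.size();
    return node;
  }

  /** A single request should examine at most this many candidates. */
  public int size() {
    return nodes.size();
  }
}
{code}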



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-7807) Assume intra-app anti-affinity as default for scheduling request inside AppPlacementAllocator

2018-01-31 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7807?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16347119#comment-16347119
 ] 

Hudson commented on YARN-7807:
--

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #13589 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/13589/])
YARN-7807. Assume intra-app anti-affinity as default for scheduling (arun 
suresh: rev 644afe5fd800ac4f2b873a99f9b3868c3a8c5c40)
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/placement/SingleConstraintAppPlacementAllocator.java


> Assume intra-app anti-affinity as default for scheduling request inside 
> AppPlacementAllocator
> -
>
> Key: YARN-7807
> URL: https://issues.apache.org/jira/browse/YARN-7807
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Wangda Tan
>Assignee: Wangda Tan
>Priority: Blocker
> Fix For: YARN-6592
>
> Attachments: YARN-7807-YARN-6592.001.patch
>
>
> See discussion on: 
> https://issues.apache.org/jira/browse/YARN-7791?focusedCommentId=16336857=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-16336857
> We need to make changes to AppPlacementAllocator to treat default target 
> allocation tags as intra-app.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-3153) Capacity Scheduler max AM resource limit for queues is defined as percentage but used as ratio

2018-01-31 Thread Manikandan R (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-3153?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16347144#comment-16347144
 ] 

Manikandan R commented on YARN-3153:


[~leftnoteasy] [~sunilg]

I am interested in working on this. I like the proposal mentioned in 
https://issues.apache.org/jira/browse/YARN-3153?focusedCommentId=14317255=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-14317255
 except for the name. Instead of 
{{yarn.scheduler.capacity.maximum-am-capacity-per-queue}}, we can use 
{{yarn.scheduler.capacity.maximum-am-capacity}}, as mentioned in [~cwelch]'s 
comment, since this property is meant for queues anyway. 

Regarding inheritance, we can make it similar to the "disable_preemption" 
property: 

1. For the "root" queue, the default value could be 
DEFAULT_MAXIMUM_APPLICATIONMASTERS_RESOURCE_CAPACITY, used when no value is 
configured at the root level.
2. For "leaf" queues, the default value could be the output of #1, used when no 
value is configured at the leaf-queue level. A sketch of this lookup order 
follows below.
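
A sketch of that lookup order (hypothetical helper, not existing 
CapacityScheduler code; the 0.1f default is an assumed value):

{code:java}
import org.apache.hadoop.conf.Configuration;

class MaxAmCapacityLookup {
  // stand-in for DEFAULT_MAXIMUM_APPLICATIONMASTERS_RESOURCE_CAPACITY
  static final float DEFAULT_MAX_AM_CAPACITY = 0.1f;

  static float getMaxAmCapacity(Configuration conf, String leafQueuePath) {
    // #1: root level -- configured value, else the hard-coded default
    float rootValue = conf.getFloat(
        "yarn.scheduler.capacity.maximum-am-capacity", DEFAULT_MAX_AM_CAPACITY);
    // #2: leaf level -- configured value, else inherit the root result
    return conf.getFloat(
        "yarn.scheduler.capacity." + leafQueuePath + ".maximum-am-capacity",
        rootValue);
  }
}
{code}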

Please share your thoughts.

> Capacity Scheduler max AM resource limit for queues is defined as percentage 
> but used as ratio
> --
>
> Key: YARN-3153
> URL: https://issues.apache.org/jira/browse/YARN-3153
> Project: Hadoop YARN
>  Issue Type: Bug
>Reporter: Wangda Tan
>Assignee: Wangda Tan
>Priority: Critical
>
> The existing Capacity Scheduler can limit the maximum applications running 
> within a queue. The config is 
> yarn.scheduler.capacity.maximum-am-resource-percent, but it is actually used 
> as a "ratio": the implementation assumes the input will be \[0,1\]. So a user 
> can currently specify it as high as 100, which would let AMs use 100x the 
> queue capacity. We should fix that.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-4606) CapacityScheduler: applications could get starved because computation of #activeUsers considers pending apps

2018-01-31 Thread Eric Payne (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-4606?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16347209#comment-16347209
 ] 

Eric Payne commented on YARN-4606:
--

Thanks everyone for the thoughtful analysis.

I am still analyzing in more depth, but I have a couple of thoughts:
{quote}this is a (known) potential issue of fair ordering policy.
{quote}
This can happen with the fifo ordering policy as well.
{quote}have {{activeUsersOfPendingApps}} along with {{activeUsers}}. Hence in 
case of scheduling we can depend only on {{activeUsers}}
{quote}
We need to be careful with these counts because a user can have both active and 
pending apps. I think the definitions should be:
 - {{activeUsers}}: users that have at least one active app
 - {{activeUsersOfPendingApps}}: users that have only pending apps.
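
In code, those two definitions would split users roughly like this (toy model):

{code:java}
import java.util.HashSet;
import java.util.Map;
import java.util.Set;

class ActiveUserSets {
  record UserApps(int activeApps, int pendingApps) {}

  static void classify(Map<String, UserApps> users,
      Set<String> activeUsers, Set<String> activeUsersOfPendingApps) {
    users.forEach((user, apps) -> {
      if (apps.activeApps() > 0) {
        activeUsers.add(user);              // at least one active app
      } else if (apps.pendingApps() > 0) {
        activeUsersOfPendingApps.add(user); // only pending apps
      }
    });
  }

  public static void main(String[] args) {
    Set<String> active = new HashSet<>();
    Set<String> pendingOnly = new HashSet<>();
    classify(Map.of("user1", new UserApps(1, 0),
                    "user2", new UserApps(2, 1),  // counted as active, not both
                    "user3", new UserApps(0, 2)),
        active, pendingOnly);
    System.out.println(active + " / " + pendingOnly); // e.g. [user1, user2] / [user3]
  }
}
{code}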

> CapacityScheduler: applications could get starved because computation of 
> #activeUsers considers pending apps 
> -
>
> Key: YARN-4606
> URL: https://issues.apache.org/jira/browse/YARN-4606
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: capacity scheduler, capacityscheduler
>Affects Versions: 2.8.0, 2.7.1
>Reporter: Karam Singh
>Assignee: Wangda Tan
>Priority: Critical
> Attachments: YARN-4606.1.poc.patch
>
>
> Currently, if all applications belonging to the same user in a LeafQueue are 
> pending (caused by max-am-percent, etc.), ActiveUsersManager still considers 
> the user an active user. This could lead to starvation of active 
> applications, for example:
> - App1 (belongs to user1) and app2 (belongs to user2) are active; app3 
> (belongs to user3) and app4 (belongs to user4) are pending
> - ActiveUsersManager returns #active-users=4
> - However, only two users (user1/user2) are able to allocate new resources, 
> so the computed user-limit-resource could be lower than expected.
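
Toy numbers for the example, assuming user-limit-resource is an even split of 
queue capacity among counted active users:

{code:java}
public class UserLimitExample {
  public static void main(String[] args) {
    float queueCapacity = 100f;  // say, 100 GB
    int countedActiveUsers = 4;  // user1..user4, pending-only users included
    int trulyActiveUsers = 2;    // only user1/user2 can actually allocate

    System.out.println(queueCapacity / countedActiveUsers); // 25.0 GB per user
    System.out.println(queueCapacity / trulyActiveUsers);   // 50.0 GB per user
    // user1/user2 are each capped near 25 GB although ~50 GB is available
  }
}
{code}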



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org


