[jira] [Updated] (YARN-6592) [Umbrella] Rich placement constraints in YARN

2020-01-22 Thread Panagiotis Garefalakis (Jira)


 [ 
https://issues.apache.org/jira/browse/YARN-6592?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Panagiotis Garefalakis updated YARN-6592:
-
Attachment: (was: [YARN-7812] Improvements to Rich Placement 
Constraints in YARN - ASF JIRA.pdf)

> [Umbrella] Rich placement constraints in YARN
> -
>
> Key: YARN-6592
> URL: https://issues.apache.org/jira/browse/YARN-6592
> Project: Hadoop YARN
>  Issue Type: New Feature
>Reporter: Konstantinos Karanasos
>Priority: Major
> Fix For: 3.1.0
>
> Attachments: YARN-6592-Rich-Placement-Constraints-Design-V1.pdf
>
>
> This JIRA consolidates the efforts of YARN-5468 and YARN-4902.
> It adds support for rich placement constraints to YARN, such as affinity and 
> anti-affinity between allocations within the same application or across 
> applications.
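
The affinity/anti-affinity idea above can be illustrated with a small, self-contained sketch. Everything below (the class name, the satisfies helper, the tag-count map) is hypothetical and only models the concept; it is not YARN's actual PlacementConstraints API.

```java
import java.util.Map;

// Toy model: a node is described by how many containers with each allocation
// tag it currently hosts; a constraint is checked against those counts.
public class ConstraintDemo {
    /** Affinity: place only where the target tag already appears (count >= 1).
     *  Anti-affinity: place only where it does not appear at all (count == 0). */
    static boolean satisfies(Map<String, Integer> nodeTagCounts,
                             String targetTag, boolean affinity) {
        int count = nodeTagCounts.getOrDefault(targetTag, 0);
        return affinity ? count >= 1 : count == 0;
    }

    public static void main(String[] args) {
        Map<String, Integer> node = Map.of("hbase-master", 1);
        // An "hbase-rs" container with affinity to hbase-master fits here.
        System.out.println(satisfies(node, "hbase-master", true));   // true
        // A second hbase-master with anti-affinity to itself is rejected.
        System.out.println(satisfies(node, "hbase-master", false));  // false
    }
}
```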



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-6592) [Umbrella] Rich placement constraints in YARN

2020-01-22 Thread Panagiotis Garefalakis (Jira)


 [ 
https://issues.apache.org/jira/browse/YARN-6592?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Panagiotis Garefalakis updated YARN-6592:
-
Attachment: (was: [YARN-5468] Scheduling of long-running applications - 
ASF JIRA.pdf)

> [Umbrella] Rich placement constraints in YARN
> -
>
> Key: YARN-6592
> URL: https://issues.apache.org/jira/browse/YARN-6592
> Project: Hadoop YARN
>  Issue Type: New Feature
>Reporter: Konstantinos Karanasos
>Priority: Major
> Fix For: 3.1.0
>
> Attachments: YARN-6592-Rich-Placement-Constraints-Design-V1.pdf
>
>
> This JIRA consolidates the efforts of YARN-5468 and YARN-4902.
> It adds support for rich placement constraints to YARN, such as affinity and 
> anti-affinity between allocations within the same application or across 
> applications.






[jira] [Updated] (YARN-6592) [Umbrella] Rich placement constraints in YARN

2020-01-14 Thread Panagiotis Garefalakis (Jira)


 [ 
https://issues.apache.org/jira/browse/YARN-6592?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Panagiotis Garefalakis updated YARN-6592:
-
Attachment: [YARN-5468] Scheduling of long-running applications - ASF 
JIRA.pdf

> [Umbrella] Rich placement constraints in YARN
> -
>
> Key: YARN-6592
> URL: https://issues.apache.org/jira/browse/YARN-6592
> Project: Hadoop YARN
>  Issue Type: New Feature
>Reporter: Konstantinos Karanasos
>Priority: Major
> Fix For: 3.1.0
>
> Attachments: YARN-6592-Rich-Placement-Constraints-Design-V1.pdf, 
> [YARN-5468] Scheduling of long-running applications - ASF JIRA.pdf, 
> [YARN-7812] Improvements to Rich Placement Constraints in YARN - ASF JIRA.pdf
>
>
> This JIRA consolidates the efforts of YARN-5468 and YARN-4902.
> It adds support for rich placement constraints to YARN, such as affinity and 
> anti-affinity between allocations within the same application or across 
> applications.






[jira] [Updated] (YARN-6592) [Umbrella] Rich placement constraints in YARN

2020-01-14 Thread Panagiotis Garefalakis (Jira)


 [ 
https://issues.apache.org/jira/browse/YARN-6592?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Panagiotis Garefalakis updated YARN-6592:
-
Attachment: [YARN-7812] Improvements to Rich Placement Constraints in YARN 
- ASF JIRA.pdf

> [Umbrella] Rich placement constraints in YARN
> -
>
> Key: YARN-6592
> URL: https://issues.apache.org/jira/browse/YARN-6592
> Project: Hadoop YARN
>  Issue Type: New Feature
>Reporter: Konstantinos Karanasos
>Priority: Major
> Fix For: 3.1.0
>
> Attachments: YARN-6592-Rich-Placement-Constraints-Design-V1.pdf, 
> [YARN-5468] Scheduling of long-running applications - ASF JIRA.pdf, 
> [YARN-7812] Improvements to Rich Placement Constraints in YARN - ASF JIRA.pdf
>
>
> This JIRA consolidates the efforts of YARN-5468 and YARN-4902.
> It adds support for rich placement constraints to YARN, such as affinity and 
> anti-affinity between allocations within the same application or across 
> applications.






[jira] [Comment Edited] (YARN-7839) Check node capacity before placing in the Algorithm

2018-02-02 Thread Panagiotis Garefalakis (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7839?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16350496#comment-16350496
 ] 

Panagiotis Garefalakis edited comment on YARN-7839 at 2/2/18 3:13 PM:
--

 

Submitting a simple patch that tracks available cluster resources in the 
DefaultPlacement algorithm, to support a capacity check before placement.

The actual check is part of the attemptPlacementOnNode method, which can be 
configured with the *ignoreResourceCheck* flag.

In the current patch the check is enabled in the placement step and disabled in 
the validation step.

A wrapper class *SchedulingRequestWithPlacementAttempt* was also introduced to 
keep track of the failed attempts for the rejected SchedulingRequests.

 

Thoughts?  [~asuresh] [~kkaranasos] [~cheersyang] 


was (Author: pgaref):
 

Submitting a simple patch tracking available cluster resources in the 
DefaultPlacement algorithm - to support capacity check before placement.

The actual check is part of the attemptPlacementOnNode method which could be 
configured with the **ignoreResourceCheck** flag.

In the current patch the check is enabled on placement step and disabled on the 
validation step.

A wrapper class SchedulingRequestWithPlacementAttempt was also introduced to 
keep track of the failed attempts on the rejected SchedulingRequests.

 

Thoughts?  [~asuresh] [~kkaranasos] [~cheersyang] 

> Check node capacity before placing in the Algorithm
> ---
>
> Key: YARN-7839
> URL: https://issues.apache.org/jira/browse/YARN-7839
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Arun Suresh
>Assignee: Panagiotis Garefalakis
>Priority: Major
> Attachments: YARN-7839-YARN-6592.001.patch
>
>
> Currently, the Algorithm assigns a node to a request purely based on whether 
> the constraints are met. It is later in the scheduling phase that the Queue 
> capacity and Node capacity are checked. If the request cannot be placed 
> because of unavailable Queue/Node capacity, the request is retried by the 
> Algorithm.
> For clusters that are running at high utilization, we can reduce the retries 
> if we perform the Node capacity check in the Algorithm as well. The Queue 
> capacity check and the other user limit checks can still be handled by the 
> scheduler (since queues and other limits are tied to the scheduler and are 
> not scheduler-agnostic).
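
The proposed early capacity check can be sketched as follows. This is a hypothetical, self-contained model, not the actual patch: the class names (Resource, Node, Placer) and the ignoreResourceCheck flag's wiring are illustrative stand-ins for the real YARN types.

```java
// Toy resources: just memory and vcores, with a fits-in comparison.
final class Resource {
    final long memoryMb; final int vcores;
    Resource(long m, int v) { memoryMb = m; vcores = v; }
    boolean fits(Resource avail) {
        return memoryMb <= avail.memoryMb && vcores <= avail.vcores;
    }
    Resource minus(Resource r) {
        return new Resource(memoryMb - r.memoryMb, vcores - r.vcores);
    }
}

final class Node {
    Resource available;
    Node(Resource a) { available = a; }
}

final class Placer {
    /** Mirrors the idea of attemptPlacementOnNode: an optional capacity
     *  check, gated by ignoreResourceCheck, runs before accepting a node. */
    static boolean attemptPlacementOnNode(Node node, Resource request,
                                          boolean ignoreResourceCheck) {
        if (!ignoreResourceCheck && !request.fits(node.available)) {
            return false;  // reject early instead of retrying in the scheduler
        }
        node.available = node.available.minus(request);  // tentative accounting
        return true;
    }
}

public class CapacityCheckDemo {
    public static void main(String[] args) {
        Node n = new Node(new Resource(4096, 4));
        System.out.println(Placer.attemptPlacementOnNode(n, new Resource(2048, 2), false)); // true
        System.out.println(Placer.attemptPlacementOnNode(n, new Resource(4096, 4), false)); // false
        System.out.println(Placer.attemptPlacementOnNode(n, new Resource(4096, 4), true));  // true
    }
}
```

The third call shows why a separate validation step would disable the check: with ignoreResourceCheck set, placement proceeds regardless of capacity.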



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-7839) Check node capacity before placing in the Algorithm

2018-02-02 Thread Panagiotis Garefalakis (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7839?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16350496#comment-16350496
 ] 

Panagiotis Garefalakis commented on YARN-7839:
--

 

Submitting a simple patch tracking available cluster resources in the 
DefaultPlacement algorithm - to support capacity check before placement.

The actual check is part of the attemptPlacementOnNode method which could be 
configured with the **ignoreResourceCheck** flag.

In the current patch the check is enabled on placement step and disabled on the 
validation step.

A wrapper class SchedulingRequestWithPlacementAttempt was also introduced to 
keep track of the failed attempts on the rejected SchedulingRequests.

 

Thoughts?  [~asuresh] [~kkaranasos] [~cheersyang] 

> Check node capacity before placing in the Algorithm
> ---
>
> Key: YARN-7839
> URL: https://issues.apache.org/jira/browse/YARN-7839
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Arun Suresh
>Assignee: Panagiotis Garefalakis
>Priority: Major
> Attachments: YARN-7839-YARN-6592.001.patch
>
>
> Currently, the Algorithm assigns a node to a request purely based on whether 
> the constraints are met. It is later in the scheduling phase that the Queue 
> capacity and Node capacity are checked. If the request cannot be placed 
> because of unavailable Queue/Node capacity, the request is retried by the 
> Algorithm.
> For clusters that are running at high utilization, we can reduce the retries 
> if we perform the Node capacity check in the Algorithm as well. The Queue 
> capacity check and the other user limit checks can still be handled by the 
> scheduler (since queues and other limits are tied to the scheduler and are 
> not scheduler-agnostic).






[jira] [Updated] (YARN-7839) Check node capacity before placing in the Algorithm

2018-02-02 Thread Panagiotis Garefalakis (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-7839?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Panagiotis Garefalakis updated YARN-7839:
-
Attachment: YARN-7839-YARN-6592.001.patch

> Check node capacity before placing in the Algorithm
> ---
>
> Key: YARN-7839
> URL: https://issues.apache.org/jira/browse/YARN-7839
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Arun Suresh
>Assignee: Panagiotis Garefalakis
>Priority: Major
> Attachments: YARN-7839-YARN-6592.001.patch
>
>
> Currently, the Algorithm assigns a node to a request purely based on whether 
> the constraints are met. It is later in the scheduling phase that the Queue 
> capacity and Node capacity are checked. If the request cannot be placed 
> because of unavailable Queue/Node capacity, the request is retried by the 
> Algorithm.
> For clusters that are running at high utilization, we can reduce the retries 
> if we perform the Node capacity check in the Algorithm as well. The Queue 
> capacity check and the other user limit checks can still be handled by the 
> scheduler (since queues and other limits are tied to the scheduler and are 
> not scheduler-agnostic).






[jira] [Assigned] (YARN-7839) Check node capacity before placing in the Algorithm

2018-02-02 Thread Panagiotis Garefalakis (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-7839?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Panagiotis Garefalakis reassigned YARN-7839:


Assignee: Panagiotis Garefalakis

> Check node capacity before placing in the Algorithm
> ---
>
> Key: YARN-7839
> URL: https://issues.apache.org/jira/browse/YARN-7839
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Arun Suresh
>Assignee: Panagiotis Garefalakis
>Priority: Major
>
> Currently, the Algorithm assigns a node to a request purely based on whether 
> the constraints are met. It is later in the scheduling phase that the Queue 
> capacity and Node capacity are checked. If the request cannot be placed 
> because of unavailable Queue/Node capacity, the request is retried by the 
> Algorithm.
> For clusters that are running at high utilization, we can reduce the retries 
> if we perform the Node capacity check in the Algorithm as well. The Queue 
> capacity check and the other user limit checks can still be handled by the 
> scheduler (since queues and other limits are tied to the scheduler and are 
> not scheduler-agnostic).






[jira] [Updated] (YARN-7839) Check node capacity before placing in the Algorithm

2018-02-02 Thread Panagiotis Garefalakis (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-7839?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Panagiotis Garefalakis updated YARN-7839:
-
Description: 
Currently, the Algorithm assigns a node to a request purely based on whether 
the constraints are met. It is later in the scheduling phase that the Queue 
capacity and Node capacity are checked. If the request cannot be placed because 
of unavailable Queue/Node capacity, the request is retried by the Algorithm.

For clusters that are running at high utilization, we can reduce the retries if 
we perform the Node capacity check in the Algorithm as well. The Queue capacity 
check and the other user limit checks can still be handled by the scheduler 
(since queues and other limits are tied to the scheduler and are not 
scheduler-agnostic).

  was:
Currently, the Algorithm assigns a node to a requests purely based on if the 
constraints are met. It is later in the scheduling phase that the Queue 
capacity and Node capacity are checked. If the request cannot be placed because 
of unavailable Queue/Node capacity, the request is retried by the Algorithm.

For clusters that are running at high utilization, we can reduce the retries if 
we perform the Node capacity check in the Algorithm as well. The Queue capacity 
check and the other user limit checks can still be handled by the scheduler 
(since queues and other limits are tied to the scheduler, and not scheduler 
agnostic)


> Check node capacity before placing in the Algorithm
> ---
>
> Key: YARN-7839
> URL: https://issues.apache.org/jira/browse/YARN-7839
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Arun Suresh
>Priority: Major
>
> Currently, the Algorithm assigns a node to a request purely based on whether 
> the constraints are met. It is later in the scheduling phase that the Queue 
> capacity and Node capacity are checked. If the request cannot be placed 
> because of unavailable Queue/Node capacity, the request is retried by the 
> Algorithm.
> For clusters that are running at high utilization, we can reduce the retries 
> if we perform the Node capacity check in the Algorithm as well. The Queue 
> capacity check and the other user limit checks can still be handled by the 
> scheduler (since queues and other limits are tied to the scheduler and are 
> not scheduler-agnostic).






[jira] [Updated] (YARN-6597) Wrapping up allocationTags support under RMContainer state transitions

2018-01-25 Thread Panagiotis Garefalakis (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-6597?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Panagiotis Garefalakis updated YARN-6597:
-
Summary: Wrapping up allocationTags support under RMContainer state 
transitions  (was: Store and update allocation tags in the Placement Constraint 
Manager)

> Wrapping up allocationTags support under RMContainer state transitions
> --
>
> Key: YARN-6597
> URL: https://issues.apache.org/jira/browse/YARN-6597
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Konstantinos Karanasos
>Assignee: Panagiotis Garefalakis
>Priority: Major
> Attachments: YARN-6597-YARN-6592.001.patch
>
>
> Each allocation can have a set of allocation tags associated with it.
> For example, an allocation can be marked as hbase, hbase-master, spark, etc.
> These allocation tags are active in the cluster only while that container is 
> active (from the moment it gets allocated until the moment it finishes its 
> execution).
> This JIRA is responsible for storing and updating the active allocation tags 
> in the cluster in the {{PlacementConstraintManager}}.
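
The tag lifecycle described above (tags appear at allocation and disappear when the container finishes) can be sketched as follows. The TagStore class and its method names are hypothetical stand-ins, not the actual PlacementConstraintManager or AllocationTagsManager interfaces.

```java
import java.util.HashMap;
import java.util.Map;

// Hypothetical store of active allocation tags: a tag's cardinality rises
// when a container carrying it is allocated and falls when it finishes.
public class TagStore {
    private final Map<String, Integer> activeTags = new HashMap<>();

    void containerAllocated(String tag) {
        activeTags.merge(tag, 1, Integer::sum);
    }

    void containerFinished(String tag) {
        // Decrement, and drop the entry entirely when the count reaches zero.
        activeTags.computeIfPresent(tag, (t, c) -> c > 1 ? c - 1 : null);
    }

    int cardinality(String tag) {
        return activeTags.getOrDefault(tag, 0);
    }

    public static void main(String[] args) {
        TagStore store = new TagStore();
        store.containerAllocated("hbase");
        store.containerAllocated("hbase");
        System.out.println(store.cardinality("hbase")); // 2
        store.containerFinished("hbase");
        System.out.println(store.cardinality("hbase")); // 1
        store.containerFinished("hbase");
        System.out.println(store.cardinality("hbase")); // 0
    }
}
```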






[jira] [Commented] (YARN-6597) Store and update allocation tags in the Placement Constraint Manager

2018-01-25 Thread Panagiotis Garefalakis (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6597?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16339618#comment-16339618
 ] 

Panagiotis Garefalakis commented on YARN-6597:
--

Currently all RMContainer transitions take care of their allocation tags, 
keeping the AllocationTags manager up to date.

The RECOVER transition was not covered in the tests, so I added an extra case 
and fixed some typos.

[~asuresh] [~cheersyang] please take a look

> Store and update allocation tags in the Placement Constraint Manager
> 
>
> Key: YARN-6597
> URL: https://issues.apache.org/jira/browse/YARN-6597
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Konstantinos Karanasos
>Assignee: Panagiotis Garefalakis
>Priority: Major
> Attachments: YARN-6597-YARN-6592.001.patch
>
>
> Each allocation can have a set of allocation tags associated with it.
> For example, an allocation can be marked as hbase, hbase-master, spark, etc.
> These allocation tags are active in the cluster only while that container is 
> active (from the moment it gets allocated until the moment it finishes its 
> execution).
> This JIRA is responsible for storing and updating the active allocation tags 
> in the cluster in the {{PlacementConstraintManager}}.






[jira] [Updated] (YARN-6597) Store and update allocation tags in the Placement Constraint Manager

2018-01-25 Thread Panagiotis Garefalakis (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-6597?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Panagiotis Garefalakis updated YARN-6597:
-
Attachment: (was: YARN-6597-YARN-6592.001.patch)

> Store and update allocation tags in the Placement Constraint Manager
> 
>
> Key: YARN-6597
> URL: https://issues.apache.org/jira/browse/YARN-6597
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Konstantinos Karanasos
>Assignee: Panagiotis Garefalakis
>Priority: Major
> Attachments: YARN-6597-YARN-6592.001.patch
>
>
> Each allocation can have a set of allocation tags associated with it.
> For example, an allocation can be marked as hbase, hbase-master, spark, etc.
> These allocation tags are active in the cluster only while that container is 
> active (from the moment it gets allocated until the moment it finishes its 
> execution).
> This JIRA is responsible for storing and updating the active allocation tags 
> in the cluster in the {{PlacementConstraintManager}}.






[jira] [Updated] (YARN-6597) Store and update allocation tags in the Placement Constraint Manager

2018-01-25 Thread Panagiotis Garefalakis (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-6597?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Panagiotis Garefalakis updated YARN-6597:
-
Attachment: YARN-6597-YARN-6592.001.patch

> Store and update allocation tags in the Placement Constraint Manager
> 
>
> Key: YARN-6597
> URL: https://issues.apache.org/jira/browse/YARN-6597
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Konstantinos Karanasos
>Assignee: Panagiotis Garefalakis
>Priority: Major
> Attachments: YARN-6597-YARN-6592.001.patch
>
>
> Each allocation can have a set of allocation tags associated with it.
> For example, an allocation can be marked as hbase, hbase-master, spark, etc.
> These allocation tags are active in the cluster only while that container is 
> active (from the moment it gets allocated until the moment it finishes its 
> execution).
> This JIRA is responsible for storing and updating the active allocation tags 
> in the cluster in the {{PlacementConstraintManager}}.






[jira] [Updated] (YARN-6597) Store and update allocation tags in the Placement Constraint Manager

2018-01-25 Thread Panagiotis Garefalakis (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-6597?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Panagiotis Garefalakis updated YARN-6597:
-
Attachment: YARN-6597-YARN-6592.001.patch

> Store and update allocation tags in the Placement Constraint Manager
> 
>
> Key: YARN-6597
> URL: https://issues.apache.org/jira/browse/YARN-6597
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Konstantinos Karanasos
>Assignee: Panagiotis Garefalakis
>Priority: Major
> Attachments: YARN-6597-YARN-6592.001.patch
>
>
> Each allocation can have a set of allocation tags associated with it.
> For example, an allocation can be marked as hbase, hbase-master, spark, etc.
> These allocation tags are active in the cluster only while that container is 
> active (from the moment it gets allocated until the moment it finishes its 
> execution).
> This JIRA is responsible for storing and updating the active allocation tags 
> in the cluster in the {{PlacementConstraintManager}}.






[jira] [Updated] (YARN-7681) Scheduler should double-check placement constraint before actual allocation is made

2018-01-09 Thread Panagiotis Garefalakis (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-7681?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Panagiotis Garefalakis updated YARN-7681:
-
Attachment: YARN-7681-YARN-6592.001.patch

Reattaching the patch using the correct branch name.

> Scheduler should double-check placement constraint before actual allocation 
> is made
> ---
>
> Key: YARN-7681
> URL: https://issues.apache.org/jira/browse/YARN-7681
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: RM, scheduler
>Reporter: Weiwei Yang
>Assignee: Weiwei Yang
> Attachments: YARN-7681-YARN-6592.001.patch, YARN-7681.001.patch
>
>
> This JIRA is created based on the discussions under YARN-7612; see comments 
> after [this 
> comment|https://issues.apache.org/jira/browse/YARN-7612?focusedCommentId=16303051=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-16303051].
>  AllocationTagsManager maintains tag info that helps to make placement 
> decisions at the placement phase; however, tags change along with a 
> container's lifecycle, so it is possible that the placement violates the 
> constraints at the scheduling phase. We propose to add an extra check in the 
> scheduler to make sure constraints are still satisfied during the actual 
> allocation.
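
The race described above can be illustrated with a toy model. The names below (antiAffinityHolds, the tag set) are hypothetical; the real re-check lands in the scheduler against the AllocationTagsManager.

```java
import java.util.HashSet;
import java.util.Set;

// Toy illustration of why the scheduler must double-check: the tags on a node
// can change between the placement decision and the actual allocation.
public class DoubleCheckDemo {
    static boolean antiAffinityHolds(Set<String> nodeTags, String tag) {
        return !nodeTags.contains(tag);
    }

    public static void main(String[] args) {
        Set<String> nodeTags = new HashSet<>();

        // Placement phase: the node carries no "spark" tag, so the
        // anti-affinity constraint is satisfied.
        boolean placementOk = antiAffinityHolds(nodeTags, "spark");

        // Meanwhile, another "spark" container is allocated on the same node.
        nodeTags.add("spark");

        // Allocation phase: re-checking catches the now-violated constraint.
        boolean allocationOk = antiAffinityHolds(nodeTags, "spark");

        System.out.println(placementOk);   // true
        System.out.println(allocationOk);  // false
    }
}
```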



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-7681) Scheduler should double-check placement constraint before actual allocation is made

2018-01-09 Thread Panagiotis Garefalakis (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7681?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16318896#comment-16318896
 ] 

Panagiotis Garefalakis commented on YARN-7681:
--

Looks good to me as well. Thanks [~cheersyang]

> Scheduler should double-check placement constraint before actual allocation 
> is made
> ---
>
> Key: YARN-7681
> URL: https://issues.apache.org/jira/browse/YARN-7681
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: RM, scheduler
>Reporter: Weiwei Yang
>Assignee: Weiwei Yang
> Attachments: YARN-7681.001.patch
>
>
> This JIRA is created based on the discussions under YARN-7612; see comments 
> after [this 
> comment|https://issues.apache.org/jira/browse/YARN-7612?focusedCommentId=16303051=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-16303051].
>  AllocationTagsManager maintains tag info that helps to make placement 
> decisions at the placement phase; however, tags change along with a 
> container's lifecycle, so it is possible that the placement violates the 
> constraints at the scheduling phase. We propose to add an extra check in the 
> scheduler to make sure constraints are still satisfied during the actual 
> allocation.






[jira] [Comment Edited] (YARN-6619) AMRMClient Changes to use the PlacementConstraint and SchcedulingRequest objects

2018-01-05 Thread Panagiotis Garefalakis (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6619?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16313879#comment-16313879
 ] 

Panagiotis Garefalakis edited comment on YARN-6619 at 1/5/18 9:06 PM:
--

[~asuresh] not at all, it depends on YARN-7696 so it makes sense.


was (Author: pgaref):
[~asuresh] not at all, it relates with YARN-7696 so it makes sense.

> AMRMClient Changes to use the PlacementConstraint and SchcedulingRequest 
> objects
> 
>
> Key: YARN-6619
> URL: https://issues.apache.org/jira/browse/YARN-6619
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Arun Suresh
>Assignee: Arun Suresh
>
> Opening this JIRA to track changes needed in the AMRMClient to incorporate 
> the PlacementConstraint and SchedulingRequest objects






[jira] [Commented] (YARN-6619) AMRMClient Changes to use the PlacementConstraint and SchcedulingRequest objects

2018-01-05 Thread Panagiotis Garefalakis (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6619?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16313879#comment-16313879
 ] 

Panagiotis Garefalakis commented on YARN-6619:
--

[~asuresh] not at all, it relates with YARN-7696 so it makes sense.

> AMRMClient Changes to use the PlacementConstraint and SchcedulingRequest 
> objects
> 
>
> Key: YARN-6619
> URL: https://issues.apache.org/jira/browse/YARN-6619
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Arun Suresh
>Assignee: Arun Suresh
>
> Opening this JIRA to track changes needed in the AMRMClient to incorporate 
> the PlacementConstraint and SchedulingRequest objects






[jira] [Commented] (YARN-7682) Expose canAssign method in the PlacementConstraintManager

2018-01-03 Thread Panagiotis Garefalakis (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7682?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16309790#comment-16309790
 ] 

Panagiotis Garefalakis commented on YARN-7682:
--

A temporary Maven dependency error made the Jenkins runs for patches v004 and 
v005 fail. Resolved in v006 (even though the patches are identical).

> Expose canAssign method in the PlacementConstraintManager
> -
>
> Key: YARN-7682
> URL: https://issues.apache.org/jira/browse/YARN-7682
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Arun Suresh
>Assignee: Panagiotis Garefalakis
> Attachments: YARN-7682-YARN-6592.001.patch, 
> YARN-7682-YARN-6592.002.patch, YARN-7682-YARN-6592.003.patch, 
> YARN-7682-YARN-6592.004.patch, YARN-7682-YARN-6592.005.patch, 
> YARN-7682-YARN-6592.006.patch, YARN-7682.wip.patch
>
>
> As per the discussion in YARN-7613, let's expose a {{canAssign}} method in 
> the PlacementConstraintManager that takes a sourceTags, applicationId, 
> SchedulerNode and AllocationTagsManager and returns true if constraints are 
> not violated by placing the container on the node.
> I prefer not passing in the SchedulingRequest, since it can have > 1 
> numAllocations. We want this API to be called for single allocations.
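
The shape of the proposed check can be sketched roughly as below. The signature is deliberately simplified and hypothetical: the real method would take YARN's SchedulerNode and AllocationTagsManager rather than a plain map, and the cardinality bounds would come from the constraint itself.

```java
import java.util.Map;
import java.util.Set;

// Simplified stand-in for the proposed canAssign: true iff placing ONE
// allocation with the given source tags on the node keeps each tag's
// count within [minCardinality, maxCardinality].
public class CanAssignDemo {
    static boolean canAssign(Set<String> sourceTags,
                             Map<String, Integer> nodeTagCounts,
                             int minCardinality, int maxCardinality) {
        for (String tag : sourceTags) {
            int current = nodeTagCounts.getOrDefault(tag, 0);
            if (current < minCardinality || current + 1 > maxCardinality) {
                return false;
            }
        }
        return true;
    }

    public static void main(String[] args) {
        Map<String, Integer> node = Map.of("hbase", 1);
        // At most 2 "hbase" containers per node: one more still fits.
        System.out.println(canAssign(Set.of("hbase"), node, 0, 2)); // true
        // At most 1 per node (anti-affinity-like): rejected.
        System.out.println(canAssign(Set.of("hbase"), node, 0, 1)); // false
    }
}
```

Checking a single allocation at a time is what motivates not passing a whole SchedulingRequest, which may carry more than one numAllocations.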






[jira] [Updated] (YARN-7682) Expose canAssign method in the PlacementConstraintManager

2018-01-03 Thread Panagiotis Garefalakis (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-7682?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Panagiotis Garefalakis updated YARN-7682:
-
Attachment: YARN-7682-YARN-6592.006.patch

> Expose canAssign method in the PlacementConstraintManager
> -
>
> Key: YARN-7682
> URL: https://issues.apache.org/jira/browse/YARN-7682
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Arun Suresh
>Assignee: Panagiotis Garefalakis
> Attachments: YARN-7682-YARN-6592.001.patch, 
> YARN-7682-YARN-6592.002.patch, YARN-7682-YARN-6592.003.patch, 
> YARN-7682-YARN-6592.004.patch, YARN-7682-YARN-6592.005.patch, 
> YARN-7682-YARN-6592.006.patch, YARN-7682.wip.patch
>
>
> As per the discussion in YARN-7613, let's expose a {{canAssign}} method in 
> the PlacementConstraintManager that takes a sourceTags, applicationId, 
> SchedulerNode and AllocationTagsManager and returns true if constraints are 
> not violated by placing the container on the node.
> I prefer not passing in the SchedulingRequest, since it can have > 1 
> numAllocations. We want this API to be called for single allocations.






[jira] [Updated] (YARN-7682) Expose canAssign method in the PlacementConstraintManager

2018-01-03 Thread Panagiotis Garefalakis (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-7682?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Panagiotis Garefalakis updated YARN-7682:
-
Attachment: YARN-7682-YARN-6592.005.patch

> Expose canAssign method in the PlacementConstraintManager
> -
>
> Key: YARN-7682
> URL: https://issues.apache.org/jira/browse/YARN-7682
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Arun Suresh
>Assignee: Panagiotis Garefalakis
> Attachments: YARN-7682-YARN-6592.001.patch, 
> YARN-7682-YARN-6592.002.patch, YARN-7682-YARN-6592.003.patch, 
> YARN-7682-YARN-6592.004.patch, YARN-7682-YARN-6592.005.patch, 
> YARN-7682.wip.patch
>
>
> As per the discussion in YARN-7613, let's expose a {{canAssign}} method in 
> the PlacementConstraintManager that takes a sourceTags, applicationId, 
> SchedulerNode and AllocationTagsManager and returns true if constraints are 
> not violated by placing the container on the node.
> I prefer not passing in the SchedulingRequest, since it can have > 1 
> numAllocations. We want this API to be called for single allocations.






[jira] [Updated] (YARN-7682) Expose canAssign method in the PlacementConstraintManager

2018-01-03 Thread Panagiotis Garefalakis (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-7682?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Panagiotis Garefalakis updated YARN-7682:
-
Attachment: YARN-7682-YARN-6592.004.patch

[~kkaranasos] Thanks for the comments!

bq. I think that the functions you push in the getNodeCardinalityByOp should be 
reversed

Agreed, it's safer to use the max operator for the minScopeCardinality and the 
min operator for the maxScopeCardinality.

bq. Do we need the line right after the comment “// Make sure Anti-affinity 
satisfies hard upper limit”?

We actually do, because anti-affinity is the only case where we need the 
equality min=0 and max=0.
In the other cases, max is an upper limit, e.g. fewer than 5 containers in the 
scope.
That line lets us use the same check for all constraints:

{code:java}
minScopeCardinality >= sc.getMinCardinality()
&& maxScopeCardinality < sc.getMaxCardinality()
{code}

I am also including more detailed Javadocs in the latest patch (v004).
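To make the above concrete, here is a hedged sketch of that unified check. The names (CardinalityCheck, satisfies, cardinalityByOp, effectiveMax) and the exact anti-affinity adjustment are illustrative assumptions, not the patch's actual code.

```java
import java.util.Map;
import java.util.Set;
import java.util.function.LongBinaryOperator;

// Hypothetical, simplified model of the check discussed above.
final class CardinalityCheck {
    // Aggregate the per-tag counts seen in the scope with the given operator,
    // mirroring the role of getNodeCardinalityByOp.
    static long cardinalityByOp(Map<String, Long> tagCounts, Set<String> tags,
                                LongBinaryOperator op) {
        long result = -1;
        for (String tag : tags) {
            long c = tagCounts.getOrDefault(tag, 0L);
            result = (result < 0) ? c : op.applyAsLong(result, c);
        }
        return Math.max(result, 0);
    }

    static boolean satisfies(Map<String, Long> tagCounts, Set<String> tags,
                             long minCard, long maxCard) {
        // Conservative operator choice from the thread: max for the lower
        // bound, min for the upper bound.
        long minScope = cardinalityByOp(tagCounts, tags, Math::max);
        long maxScope = cardinalityByOp(tagCounts, tags, Math::min);
        // Anti-affinity (min = 0, max = 0) must mean "exactly zero"; bumping
        // the exclusive upper bound to 1 lets one check cover every case.
        long effectiveMax = (minCard == 0 && maxCard == 0) ? 1 : maxCard;
        return minScope >= minCard && maxScope < effectiveMax;
    }
}
```

The point of the adjustment is that a single comparison pair then covers affinity, anti-affinity, and ranged cardinality alike.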


> Expose canAssign method in the PlacementConstraintManager
> -
>
> Key: YARN-7682
> URL: https://issues.apache.org/jira/browse/YARN-7682
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Arun Suresh
>Assignee: Panagiotis Garefalakis
> Attachments: YARN-7682-YARN-6592.001.patch, 
> YARN-7682-YARN-6592.002.patch, YARN-7682-YARN-6592.003.patch, 
> YARN-7682-YARN-6592.004.patch, YARN-7682.wip.patch
>
>
> As per the discussion in YARN-7613, let's expose a {{canAssign}} method in 
> the PlacementConstraintManager that takes sourceTags, an applicationId, a 
> SchedulerNode, and an AllocationTagsManager, and returns true if the 
> constraints are not violated by placing the container on the node.
> I prefer not passing in the SchedulingRequest, since it can have more than 
> one numAllocations. We want this API to be called for single allocations.






[jira] [Updated] (YARN-7682) Expose canAssign method in the PlacementConstraintManager

2018-01-02 Thread Panagiotis Garefalakis (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-7682?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Panagiotis Garefalakis updated YARN-7682:
-
Attachment: YARN-7682-YARN-6592.003.patch

[~asuresh] Thanks for the comments!
Attaching v003 of the patch.
I also added a TestPlacementConstraintsUtil class that tests the 
canSatisfyConstraints method in isolation, as discussed.

> Expose canAssign method in the PlacementConstraintManager
> -
>
> Key: YARN-7682
> URL: https://issues.apache.org/jira/browse/YARN-7682
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Arun Suresh
>Assignee: Panagiotis Garefalakis
> Attachments: YARN-7682-YARN-6592.001.patch, 
> YARN-7682-YARN-6592.002.patch, YARN-7682-YARN-6592.003.patch, 
> YARN-7682.wip.patch
>
>
> As per the discussion in YARN-7613, let's expose a {{canAssign}} method in 
> the PlacementConstraintManager that takes sourceTags, an applicationId, a 
> SchedulerNode, and an AllocationTagsManager, and returns true if the 
> constraints are not violated by placing the container on the node.
> I prefer not passing in the SchedulingRequest, since it can have more than 
> one numAllocations. We want this API to be called for single allocations.






[jira] [Commented] (YARN-7653) Rack cardinality support for AllocationTagsManager

2018-01-02 Thread Panagiotis Garefalakis (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7653?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16308092#comment-16308092
 ] 

Panagiotis Garefalakis commented on YARN-7653:
--

Hello [~leftnoteasy], I agree with the title change.
Regarding the node-group support:
* In the discussion above we agreed that we need to at least support Rack, as 
it is already defined in our API.
* In the committed patch, the CountedTags inner class is generic, with the 
goal of supporting any arbitrary node group. The only thing we would add is an 
extra data structure keeping a group-to-CountedTags mapping (in that scenario, 
RACK would just be a specific node group).
* To keep things simple, since we don't have arbitrary groups so far, this 
extra mapping is not there (we would also need a way to define/add/remove node 
groups), but I would be happy to work on that if we want to support it.
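As an illustration of the extra data structure mentioned in the second bullet, a sketch could look like this (hypothetical names; the real CountedTags/AllocationTagsManager internals differ):

```java
import java.util.HashMap;
import java.util.Map;

// Illustrative group -> tag -> count bookkeeping; RACK ids would be one kind
// of group. Names are made up for this sketch.
final class GroupTagCounts {
    // node group (e.g. a rack id) -> allocation tag -> running-container count
    private final Map<String, Map<String, Long>> groupToTags = new HashMap<>();

    void addContainer(String group, String tag) {
        groupToTags.computeIfAbsent(group, g -> new HashMap<>())
                   .merge(tag, 1L, Long::sum);
    }

    void removeContainer(String group, String tag) {
        Map<String, Long> tags = groupToTags.get(group);
        if (tags == null) {
            return;
        }
        // Drop the entry entirely when the count reaches zero.
        tags.computeIfPresent(tag, (t, c) -> c <= 1 ? null : c - 1);
        if (tags.isEmpty()) {
            groupToTags.remove(group);
        }
    }

    long cardinality(String group, String tag) {
        return groupToTags.getOrDefault(group, Map.of()).getOrDefault(tag, 0L);
    }
}
```

With such a mapping, a rack-scope query like "how many spark containers on RACK-1" becomes a single lookup.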



> Rack cardinality support for AllocationTagsManager
> --
>
> Key: YARN-7653
> URL: https://issues.apache.org/jira/browse/YARN-7653
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Panagiotis Garefalakis
>Assignee: Panagiotis Garefalakis
> Fix For: YARN-6592
>
> Attachments: YARN-7653-YARN-6592.001.patch, 
> YARN-7653-YARN-6592.002.patch, YARN-7653-YARN-6592.003.patch
>
>
> AllocationTagsManager currently supports node and cluster-wide tag 
> cardinality retrieval.
> If we want to support arbitrary node groups/scopes for our placement 
> constraints, the TagsManager should be extended to provide such functionality.
> As a first step, we need to support RACK-scope cardinality retrieval (as 
> defined in our API), e.g., how many "spark" containers are currently running 
> on "RACK-1".






[jira] [Comment Edited] (YARN-7682) Expose canAssign method in the PlacementConstraintManager

2018-01-02 Thread Panagiotis Garefalakis (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7682?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16308067#comment-16308067
 ] 

Panagiotis Garefalakis edited comment on YARN-7682 at 1/2/18 1:44 PM:
--

[~asuresh] [~kkaranasos] thanks for the feedback.

Please check the latest patch.
It assumes that target allocation tags must be present before the constrained 
request arrives; otherwise the request gets rejected, and it is up to the AM 
to resend it.
Thus there is no need to differentiate between source and target tags in the 
current implementation.

I also included some more complex test cases covering intra-application 
affinity, anti-affinity, and cardinality constraints.


was (Author: pgaref):
[~asuresh] [~kkaranasos] thanks for the feedback.

Please find attached the latest patch.
It assumes that target allocation tags must be present before the constrained 
request arrives; otherwise the request gets rejected, and it is up to the AM 
to resend it.
Thus there is no need to differentiate between source and target tags in the 
current implementation.

I also included some more complex test cases covering intra-application 
affinity, anti-affinity, and cardinality constraints.

> Expose canAssign method in the PlacementConstraintManager
> -
>
> Key: YARN-7682
> URL: https://issues.apache.org/jira/browse/YARN-7682
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Arun Suresh
>Assignee: Panagiotis Garefalakis
> Attachments: YARN-7682-YARN-6592.001.patch, 
> YARN-7682-YARN-6592.002.patch, YARN-7682.wip.patch
>
>
> As per the discussion in YARN-7613, let's expose a {{canAssign}} method in 
> the PlacementConstraintManager that takes sourceTags, an applicationId, a 
> SchedulerNode, and an AllocationTagsManager, and returns true if the 
> constraints are not violated by placing the container on the node.
> I prefer not passing in the SchedulingRequest, since it can have more than 
> one numAllocations. We want this API to be called for single allocations.






[jira] [Updated] (YARN-7682) Expose canAssign method in the PlacementConstraintManager

2018-01-02 Thread Panagiotis Garefalakis (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-7682?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Panagiotis Garefalakis updated YARN-7682:
-
Attachment: YARN-7682-YARN-6592.002.patch

[~asuresh] [~kkaranasos] thanks for the feedback.

Please find attached the latest patch.
It assumes that target allocation tags must be present before the constrained 
request arrives; otherwise the request gets rejected, and it is up to the AM 
to resend it.
Thus there is no need to differentiate between source and target tags in the 
current implementation.

I also included some more complex test cases covering intra-application 
affinity, anti-affinity, and cardinality constraints.

> Expose canAssign method in the PlacementConstraintManager
> -
>
> Key: YARN-7682
> URL: https://issues.apache.org/jira/browse/YARN-7682
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Arun Suresh
>Assignee: Panagiotis Garefalakis
> Attachments: YARN-7682-YARN-6592.001.patch, 
> YARN-7682-YARN-6592.002.patch, YARN-7682.wip.patch
>
>
> As per the discussion in YARN-7613, let's expose a {{canAssign}} method in 
> the PlacementConstraintManager that takes sourceTags, an applicationId, a 
> SchedulerNode, and an AllocationTagsManager, and returns true if the 
> constraints are not violated by placing the container on the node.
> I prefer not passing in the SchedulingRequest, since it can have more than 
> one numAllocations. We want this API to be called for single allocations.






[jira] [Comment Edited] (YARN-7682) Expose canAssign method in the PlacementConstraintManager

2017-12-28 Thread Panagiotis Garefalakis (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7682?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16305548#comment-16305548
 ] 

Panagiotis Garefalakis edited comment on YARN-7682 at 12/28/17 3:52 PM:


Attaching a first version of the patch.
The PlacementConstraintsUtil class now returns whether a node is a valid 
placement for a set of allocation tags.
It currently supports SingleConstraints, as discussed, and both the Node and 
Rack scopes.
An interesting fact: during the first allocation, when no tags exist yet, 
affinity would always fail if we required minCardinality >= 1. I fixed that by 
checking whether it is the application's first allocation.

However, for more generic scenarios like cardinality, there are different ways 
to tackle the problem. For example, given:
{code:java}
 {NODE, 5, 10,allocationTag("spark")} 
{code}
should we promote affinity on nodes where the cardinality is below cMin (5), 
or just ensure that it stays at or below cMax (10)?
[~asuresh] [~kkaranasos] Thoughts?


was (Author: pgaref):
Attaching a first version of the patch.
The PlacementConstraintsUtil class now returns whether a node is a valid 
placement for a set of allocation tags.
It currently supports SingleConstraints, as discussed, and both the Node and 
Rack scopes.
An interesting fact: during the first allocation, when no tags exist yet, 
affinity would always fail if we required minCardinality >= 1. I fixed that by 
checking whether it is the application's first allocation.

However, for more generic scenarios like cardinality, there are different ways 
to tackle the problem. For example, given:
{code:java}
 {NODE, 2, 10,allocationTag("spark")} 
{code}
should we promote affinity on nodes where the cardinality is at or below cMin 
(2), or just ensure that it stays at or below cMax (10)?
[~asuresh] [~kkaranasos] Thoughts?

> Expose canAssign method in the PlacementConstraintManager
> -
>
> Key: YARN-7682
> URL: https://issues.apache.org/jira/browse/YARN-7682
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Arun Suresh
>Assignee: Panagiotis Garefalakis
> Attachments: YARN-7682-YARN-6592.001.patch, YARN-7682.wip.patch
>
>
> As per the discussion in YARN-7613, let's expose a {{canAssign}} method in 
> the PlacementConstraintManager that takes sourceTags, an applicationId, a 
> SchedulerNode, and an AllocationTagsManager, and returns true if the 
> constraints are not violated by placing the container on the node.
> I prefer not passing in the SchedulingRequest, since it can have more than 
> one numAllocations. We want this API to be called for single allocations.






[jira] [Comment Edited] (YARN-7682) Expose canAssign method in the PlacementConstraintManager

2017-12-28 Thread Panagiotis Garefalakis (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7682?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16305548#comment-16305548
 ] 

Panagiotis Garefalakis edited comment on YARN-7682 at 12/28/17 3:51 PM:


Attaching a first version of the patch.
The PlacementConstraintsUtil class now returns whether a node is a valid 
placement for a set of allocation tags.
It currently supports SingleConstraints, as discussed, and both the Node and 
Rack scopes.
An interesting fact: during the first allocation, when no tags exist yet, 
affinity would always fail if we required minCardinality >= 1. I fixed that by 
checking whether it is the application's first allocation.

However, for more generic scenarios like cardinality, there are different ways 
to tackle the problem. For example, given:
{code:java}
 {NODE, 2, 10,allocationTag("spark")} 
{code}
should we promote affinity on nodes where the cardinality is at or below cMin 
(2), or just ensure that it stays at or below cMax (10)?
[~asuresh] [~kkaranasos] Thoughts?


was (Author: pgaref):
Attaching a first version of the patch.
The PlacementConstraintsUtil class now returns whether a node is a valid 
placement for a set of allocation tags.
It currently supports SingleConstraints, as discussed, and both the Node and 
Rack scopes.
An interesting fact: during the first allocation, when no tags exist yet, 
affinity would always fail if we required minCardinality >= 1. I fixed that by 
checking whether it is the application's first allocation.
However, for more generic scenarios like cardinality, there are different ways 
to tackle the problem. For example, given {NODE, 2, 10, allocationTag("spark")}, 
should we promote affinity on nodes where the cardinality is at or below cMin 
(2), or just ensure that it stays at or below cMax (10)?
[~asuresh] [~kkaranasos] Thoughts?

> Expose canAssign method in the PlacementConstraintManager
> -
>
> Key: YARN-7682
> URL: https://issues.apache.org/jira/browse/YARN-7682
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Arun Suresh
>Assignee: Panagiotis Garefalakis
> Attachments: YARN-7682-YARN-6592.001.patch, YARN-7682.wip.patch
>
>
> As per the discussion in YARN-7613, let's expose a {{canAssign}} method in 
> the PlacementConstraintManager that takes sourceTags, an applicationId, a 
> SchedulerNode, and an AllocationTagsManager, and returns true if the 
> constraints are not violated by placing the container on the node.
> I prefer not passing in the SchedulingRequest, since it can have more than 
> one numAllocations. We want this API to be called for single allocations.






[jira] [Updated] (YARN-7682) Expose canAssign method in the PlacementConstraintManager

2017-12-28 Thread Panagiotis Garefalakis (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-7682?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Panagiotis Garefalakis updated YARN-7682:
-
Attachment: YARN-7682-YARN-6592.001.patch

Attaching a first version of the patch.
The PlacementConstraintsUtil class now returns whether a node is a valid 
placement for a set of allocation tags.
It currently supports SingleConstraints, as discussed, and both the Node and 
Rack scopes.
An interesting fact: during the first allocation, when no tags exist yet, 
affinity would always fail if we required minCardinality >= 1. I fixed that by 
checking whether it is the application's first allocation.
However, for more generic scenarios like cardinality, there are different ways 
to tackle the problem. For example, given {NODE, 2, 10, allocationTag("spark")}, 
should we promote affinity on nodes where the cardinality is at or below cMin 
(2), or just ensure that it stays at or below cMax (10)?
[~asuresh] [~kkaranasos] Thoughts?

> Expose canAssign method in the PlacementConstraintManager
> -
>
> Key: YARN-7682
> URL: https://issues.apache.org/jira/browse/YARN-7682
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Arun Suresh
>Assignee: Panagiotis Garefalakis
> Attachments: YARN-7682-YARN-6592.001.patch, YARN-7682.wip.patch
>
>
> As per the discussion in YARN-7613, let's expose a {{canAssign}} method in 
> the PlacementConstraintManager that takes sourceTags, an applicationId, a 
> SchedulerNode, and an AllocationTagsManager, and returns true if the 
> constraints are not violated by placing the container on the node.
> I prefer not passing in the SchedulingRequest, since it can have more than 
> one numAllocations. We want this API to be called for single allocations.






[jira] [Comment Edited] (YARN-7613) Implement Planning algorithms for rich placement

2017-12-27 Thread Panagiotis Garefalakis (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7613?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16304982#comment-16304982
 ] 

Panagiotis Garefalakis edited comment on YARN-7613 at 12/28/17 1:56 AM:


bq. Also - am assuming YARN-7682 patch will have a more fleshed-out canAssign - 
one that checks if placement does not violate affinity, anti-affinity or 
cardinality (assuming constraint can be transformed into a SingleConstraint)
[~asuresh] Yes, that's exactly my plan. I am working on YARN-7682 now so that 
we can have a complete placementAlgorithm version after both patches merge.


was (Author: pgaref):
bq. Also - am assuming YARN-7682 patch will have a more fleshed-out canAssign - 
one that checks if placement does not violate affinity, anti-affinity or 
cardinality (assuming constraint can be transformed into a SingleConstraint)
[~asuresh] Yes, that's exactly my plan. I am working on YARN-7682 now so that 
we can have a complete end-to-end version after both patches merge.

> Implement Planning algorithms for rich placement
> 
>
> Key: YARN-7613
> URL: https://issues.apache.org/jira/browse/YARN-7613
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Arun Suresh
>Assignee: Panagiotis Garefalakis
> Attachments: YARN-7613-YARN-6592.001.patch, 
> YARN-7613-YARN-6592.002.patch, YARN-7613-YARN-6592.003.patch, 
> YARN-7613-YARN-6592.004.patch, YARN-7613-YARN-6592.005.patch, 
> YARN-7613-YARN-6592.006.patch, YARN-7613.wip.patch
>
>







[jira] [Updated] (YARN-7613) Implement Planning algorithms for rich placement

2017-12-27 Thread Panagiotis Garefalakis (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-7613?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Panagiotis Garefalakis updated YARN-7613:
-
Attachment: YARN-7613-YARN-6592.006.patch

Patch v006: removing an unused variable from DefaultPlacementAlgorithm.

> Implement Planning algorithms for rich placement
> 
>
> Key: YARN-7613
> URL: https://issues.apache.org/jira/browse/YARN-7613
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Arun Suresh
>Assignee: Panagiotis Garefalakis
> Attachments: YARN-7613-YARN-6592.001.patch, 
> YARN-7613-YARN-6592.002.patch, YARN-7613-YARN-6592.003.patch, 
> YARN-7613-YARN-6592.004.patch, YARN-7613-YARN-6592.005.patch, 
> YARN-7613-YARN-6592.006.patch, YARN-7613.wip.patch
>
>







[jira] [Commented] (YARN-7613) Implement Planning algorithms for rich placement

2017-12-27 Thread Panagiotis Garefalakis (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7613?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16304982#comment-16304982
 ] 

Panagiotis Garefalakis commented on YARN-7613:
--

bq. Also - am assuming YARN-7682 patch will have a more fleshed-out canAssign - 
one that checks if placement does not violate affinity, anti-affinity or 
cardinality (assuming constraint can be transformed into a SingleConstraint)
[~asuresh] Yes, that's exactly my plan. I am working on YARN-7682 now so that 
we can have a complete end-to-end version after both patches merge.

> Implement Planning algorithms for rich placement
> 
>
> Key: YARN-7613
> URL: https://issues.apache.org/jira/browse/YARN-7613
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Arun Suresh
>Assignee: Panagiotis Garefalakis
> Attachments: YARN-7613-YARN-6592.001.patch, 
> YARN-7613-YARN-6592.002.patch, YARN-7613-YARN-6592.003.patch, 
> YARN-7613-YARN-6592.004.patch, YARN-7613-YARN-6592.005.patch, 
> YARN-7613.wip.patch
>
>







[jira] [Updated] (YARN-7613) Implement Planning algorithms for rich placement

2017-12-27 Thread Panagiotis Garefalakis (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-7613?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Panagiotis Garefalakis updated YARN-7613:
-
Attachment: YARN-7613-YARN-6592.005.patch

Patch v005: fixing license, Javadoc, and whitespace warnings.

> Implement Planning algorithms for rich placement
> 
>
> Key: YARN-7613
> URL: https://issues.apache.org/jira/browse/YARN-7613
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Arun Suresh
>Assignee: Panagiotis Garefalakis
> Attachments: YARN-7613-YARN-6592.001.patch, 
> YARN-7613-YARN-6592.002.patch, YARN-7613-YARN-6592.003.patch, 
> YARN-7613-YARN-6592.004.patch, YARN-7613-YARN-6592.005.patch, 
> YARN-7613.wip.patch
>
>







[jira] [Updated] (YARN-7613) Implement Planning algorithms for rich placement

2017-12-27 Thread Panagiotis Garefalakis (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-7613?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Panagiotis Garefalakis updated YARN-7613:
-
Attachment: YARN-7613-YARN-6592.004.patch

Rebasing: Patch 004

> Implement Planning algorithms for rich placement
> 
>
> Key: YARN-7613
> URL: https://issues.apache.org/jira/browse/YARN-7613
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Arun Suresh
>Assignee: Panagiotis Garefalakis
> Attachments: YARN-7613-YARN-6592.001.patch, 
> YARN-7613-YARN-6592.002.patch, YARN-7613-YARN-6592.003.patch, 
> YARN-7613-YARN-6592.004.patch, YARN-7613.wip.patch
>
>







[jira] [Updated] (YARN-7613) Implement Planning algorithms for rich placement

2017-12-27 Thread Panagiotis Garefalakis (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-7613?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Panagiotis Garefalakis updated YARN-7613:
-
Attachment: YARN-7613-YARN-6592.003.patch

Patch v003 with a YarnConfigurationFields test fix.

> Implement Planning algorithms for rich placement
> 
>
> Key: YARN-7613
> URL: https://issues.apache.org/jira/browse/YARN-7613
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Arun Suresh
>Assignee: Panagiotis Garefalakis
> Attachments: YARN-7613-YARN-6592.001.patch, 
> YARN-7613-YARN-6592.002.patch, YARN-7613-YARN-6592.003.patch, 
> YARN-7613.wip.patch
>
>







[jira] [Updated] (YARN-7613) Implement Planning algorithms for rich placement

2017-12-27 Thread Panagiotis Garefalakis (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-7613?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Panagiotis Garefalakis updated YARN-7613:
-
Attachment: YARN-7613-YARN-6592.002.patch

Thanks for the comments, [~asuresh]!
Attaching patch 002:
* Removing the dummy SamplePlacementAlgorithm and using 
DefaultPlacementAlgorithm instead.
* Using a dummy canAssign to avoid test failures (will be fixed by YARN-7682).
* BatchedRequests now implements a SchedulingRequest iterator.
* Renaming attemptAllocationOnNode to attemptPlacementOnNode.
* Fixing tests and the default config.

> Implement Planning algorithms for rich placement
> 
>
> Key: YARN-7613
> URL: https://issues.apache.org/jira/browse/YARN-7613
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Arun Suresh
>Assignee: Panagiotis Garefalakis
> Attachments: YARN-7613-YARN-6592.001.patch, 
> YARN-7613-YARN-6592.002.patch, YARN-7613.wip.patch
>
>







[jira] [Comment Edited] (YARN-7682) Expose canAssign method in the PlacementConstraintManager

2017-12-26 Thread Panagiotis Garefalakis (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7682?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16304075#comment-16304075
 ] 

Panagiotis Garefalakis edited comment on YARN-7682 at 12/26/17 11:43 PM:
-

bq. I think we need to pass the whole SchedulerNode in, not just the nodeId, 
since we need to get the node's rack.
Correct.

[~asuresh] [~kkaranasos]
Following up on the discussion: my only concern is that the 
PlacementConstraints util class is part of the API package, while the 
TagsManager is part of the RM package. We would have to move one of them to 
avoid creating a circular Maven dependency.

Thoughts?



was (Author: pgaref):
bq. I think we need to pass the whole SchedulerNode in, not just the nodeId, 
since we need to get the node's rack.
Correct.

[~asuresh] [~kkaranasos]
Following the discussion: my only concern is that the PlacementConstraints 
util class is part of the API package, while the TagsManager is part of the RM 
package. We would have to move one of them to avoid creating a circular Maven 
dependency.

Thoughts?


> Expose canAssign method in the PlacementConstraintManager
> -
>
> Key: YARN-7682
> URL: https://issues.apache.org/jira/browse/YARN-7682
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Arun Suresh
>Assignee: Panagiotis Garefalakis
> Attachments: YARN-7682.wip.patch
>
>
> As per the discussion in YARN-7613, let's expose a {{canAssign}} method in 
> the PlacementConstraintManager that takes sourceTags, an applicationId, a 
> SchedulerNode, and an AllocationTagsManager, and returns true if the 
> constraints are not violated by placing the container on the node.
> I prefer not passing in the SchedulingRequest, since it can have more than 
> one numAllocations. We want this API to be called for single allocations.






[jira] [Commented] (YARN-7682) Expose canAssign method in the PlacementConstraintManager

2017-12-26 Thread Panagiotis Garefalakis (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7682?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16304075#comment-16304075
 ] 

Panagiotis Garefalakis commented on YARN-7682:
--

bq. I think we need to pass the whole SchedulerNode in, not just the nodeId, 
since we need to get the node's rack.
Correct.

[~asuresh] [~kkaranasos]
Following the discussion: my only concern is that the PlacementConstraints 
util class is part of the API package, while the TagsManager is part of the RM 
package. We would have to move one of them to avoid creating a circular Maven 
dependency.

Thoughts?


> Expose canAssign method in the PlacementConstraintManager
> -
>
> Key: YARN-7682
> URL: https://issues.apache.org/jira/browse/YARN-7682
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Arun Suresh
>Assignee: Panagiotis Garefalakis
> Attachments: YARN-7682.wip.patch
>
>
> As per the discussion in YARN-7613, let's expose a {{canAssign}} method in 
> the PlacementConstraintManager that takes sourceTags, an applicationId, a 
> SchedulerNode, and an AllocationTagsManager, and returns true if the 
> constraints are not violated by placing the container on the node.
> I prefer not passing in the SchedulingRequest, since it can have more than 
> one numAllocations. We want this API to be called for single allocations.






[jira] [Updated] (YARN-7613) Implement Planning algorithms for rich placement

2017-12-26 Thread Panagiotis Garefalakis (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-7613?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Panagiotis Garefalakis updated YARN-7613:
-
Attachment: YARN-7613-YARN-6592.001.patch

Attaching patch v001, taking into account the comments above:
* Extending AllocationTagsManager to keep track of temporary container tags 
during the placement cycle.
* Removing applicationId from the add/remove container methods, since it can 
be derived from the containerId.
* A BasicPlacementAlgorithm implementation with two simple iterators (Serial 
and PopularTags).
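The temporary-tag bookkeeping in the first bullet could be modeled roughly as follows (an illustrative sketch with made-up names, not the actual AllocationTagsManager API):

```java
import java.util.HashMap;
import java.util.Map;

// Illustrative sketch: committed counts reflect placed containers, while
// temporary counts track in-flight placements made during one placement
// cycle, so they can be committed or rolled back as a group.
final class TempTagTracker {
    private final Map<String, Long> committed = new HashMap<>();
    private final Map<String, Long> temporary = new HashMap<>();

    void addTemporary(String tag) {
        temporary.merge(tag, 1L, Long::sum);
    }

    // Fold the cycle's tentative placements into the committed counts.
    void commit() {
        temporary.forEach((t, c) -> committed.merge(t, c, Long::sum));
        temporary.clear();
    }

    // Discard the cycle's tentative placements.
    void rollback() {
        temporary.clear();
    }

    // Cardinality as seen by the placement algorithm: committed plus in-flight.
    long cardinality(String tag) {
        return committed.getOrDefault(tag, 0L)
             + temporary.getOrDefault(tag, 0L);
    }
}
```

Letting the algorithm see in-flight tags is what keeps constraints consistent when several requests are placed within the same cycle.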

> Implement Planning algorithms for rich placement
> 
>
> Key: YARN-7613
> URL: https://issues.apache.org/jira/browse/YARN-7613
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Arun Suresh
>Assignee: Panagiotis Garefalakis
> Attachments: YARN-7613-YARN-6592.001.patch, YARN-7613.wip.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-7682) Expose canAssign method in the PlacementConstraintManager

2017-12-26 Thread Panagiotis Garefalakis (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7682?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16304026#comment-16304026
 ] 

Panagiotis Garefalakis commented on YARN-7682:
--

Thanks for the comments, [~asuresh] and [~kkaranasos]!

bq. If we want to make the canAssign be part of the PCM, I think we should make 
the tags manager be a field of the PCM rather than passing it as a parameter 
(i.e., pass the tag manager during PCM's initialization)
Makes sense to me.

bq. We can support composite constraints as a second step (including delayed 
or).
Sure, I am working on an updated version with simple constraints now.

bq. What about rack scope, given YARN-7653 is there
Agreed, I am changing the API accordingly for the new patch.



> Expose canAssign method in the PlacementConstraintManager
> -
>
> Key: YARN-7682
> URL: https://issues.apache.org/jira/browse/YARN-7682
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Arun Suresh
>Assignee: Panagiotis Garefalakis
> Attachments: YARN-7682.wip.patch
>
>
> As per the discussion in YARN-7613, let's expose a {{canAssign}} method in the 
> PlacementConstraintManager that takes a set of sourceTags, an applicationId, a 
> SchedulerNode and an AllocationTagsManager, and returns true if the constraints are 
> not violated by placing the container on the node.
> I prefer not passing in the SchedulingRequest, since it can have > 1 
> numAllocations. We want this API to be called for single allocations.
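To make the proposed contract concrete, here is a hedged, stand-alone sketch (the YARN types are replaced by plain stand-ins: the node is a string id and the AllocationTagsManager is a node-to-tag-count map; only a simple anti-affinity check is modeled):

```java
import java.util.Map;
import java.util.Set;

/** Hypothetical sketch of the canAssign contract discussed above, with the
 *  real YARN types replaced by simple stand-ins. Models anti-affinity only:
 *  placement is feasible iff none of the source tags is already on the node. */
public class CanAssignSketch {

    static boolean canAssign(Set<String> sourceTags, String nodeId,
                             Map<String, Map<String, Long>> tagsByNode) {
        Map<String, Long> nodeTags = tagsByNode.getOrDefault(nodeId, Map.of());
        for (String tag : sourceTags) {
            if (nodeTags.getOrDefault(tag, 0L) > 0L) {
                return false; // an existing allocation with this tag violates anti-affinity
            }
        }
        return true;
    }

    /** Demo: node-1 already runs an "hbase" container, node-2 does not. */
    static boolean demo() {
        Map<String, Map<String, Long>> tags = Map.of("node-1", Map.of("hbase", 1L));
        return !canAssign(Set.of("hbase"), "node-1", tags)
                && canAssign(Set.of("hbase"), "node-2", tags);
    }

    public static void main(String[] args) {
        System.out.println(demo());
    }
}
```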






[jira] [Updated] (YARN-7682) Expose canAssign method in the PlacementConstraintManager

2017-12-26 Thread Panagiotis Garefalakis (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-7682?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Panagiotis Garefalakis updated YARN-7682:
-
Attachment: YARN-7682.wip.patch

Attaching a proof of concept patch after our discussion with [~asuresh].
The canAssign method is now part of the PlacementConstraintManager and is 
responsible for single constrained allocations. 
[~kkaranasos] please take a look - the main part missing is proper expression 
transformations, which I guess should be treated differently depending on the 
type (Composite, Target, Single)?
We also have the delayedOr special case that should be taken into account at 
this level, I believe.

> Expose canAssign method in the PlacementConstraintManager
> -
>
> Key: YARN-7682
> URL: https://issues.apache.org/jira/browse/YARN-7682
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Arun Suresh
>Assignee: Panagiotis Garefalakis
> Attachments: YARN-7682.wip.patch
>
>
> As per the discussion in YARN-7613, let's expose a {{canAssign}} method in the 
> PlacementConstraintManager that takes a set of sourceTags, an applicationId, a 
> SchedulerNode and an AllocationTagsManager, and returns true if the constraints are 
> not violated by placing the container on the node.
> I prefer not passing in the SchedulingRequest, since it can have > 1 
> numAllocations. We want this API to be called for single allocations.






[jira] [Comment Edited] (YARN-7613) Implement Planning algorithms for rich placement

2017-12-25 Thread Panagiotis Garefalakis (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7613?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16303327#comment-16303327
 ] 

Panagiotis Garefalakis edited comment on YARN-7613 at 12/25/17 5:29 PM:


Thanks for the comments [~asuresh]

bq. Based on discussions in YARN-7612, Lets move the canAssign method into a 
separate utility class and handle that in YARN-7681.

It should definitely be in a separate class, but I am not sure it should be a 
utility one. It is part of constraint satisfaction, so it looks quite 
fundamental to me - in a previous draft patch, for example, I had the method as 
part of the constraint manager, which did make some sense.

bq. At the very least, it should take the container source tags (from the 
SchedulingRequest), the target SchedulerNode, the TagsManager and the 
ConstraintsManager - and return true if it can assign the source tag to the 
Scheduler node without violating any constraints.

ApplicationId is also needed, since we query the tags based on it, but I agree 
about the rest - I would also pass the SchedulingRequest object instead of the 
tags, as it contains more information.
AlgorithmContext (described in your previous comment) could be helpful here, 
since we would not have to pass the Tags and Constraints managers explicitly.



was (Author: pgaref):
Thanks for the comments [~asuresh]

bq. Based on discussions in YARN-7612, Lets move the canAssign method into a 
separate utility class and handle that in YARN-7681.

It should definitely be in a separate class, but I am not sure it should be a 
utility one. It is part of constraint satisfaction, so it looks quite 
fundamental to me - in a previous draft patch, for example, I had the method as 
part of the constraint manager, which did make some sense.

bq. At the very least, it should take the container source tags (from the 
SchedulingRequest), the target SchedulerNode, the TagsManager and the 
ConstraintsManager - and return true if it can assign the source tag to the 
Scheduler node without violating any constraints.

ApplicationId is also needed, since we query the tags based on it, but I agree 
about the rest - I would also pass the SchedulingRequest object instead of the 
tags, as it contains more information.
AlgorithmContext (described in your previous comment) could be helpful here as 
well, since we would not have to pass the Tags and Constraints managers 
explicitly.


> Implement Planning algorithms for rich placement
> 
>
> Key: YARN-7613
> URL: https://issues.apache.org/jira/browse/YARN-7613
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Arun Suresh
>Assignee: Panagiotis Garefalakis
> Attachments: YARN-7613.wip.patch
>
>







[jira] [Comment Edited] (YARN-7613) Implement Planning algorithms for rich placement

2017-12-25 Thread Panagiotis Garefalakis (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7613?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16303327#comment-16303327
 ] 

Panagiotis Garefalakis edited comment on YARN-7613 at 12/25/17 5:28 PM:


Thanks for the comments [~asuresh]

bq. Based on discussions in YARN-7612, Lets move the canAssign method into a 
separate utility class and handle that in YARN-7681.

It should definitely be in a separate class, but I am not sure it should be a 
utility one. It is part of constraint satisfaction, so it looks quite 
fundamental to me - in a previous draft patch, for example, I had the method as 
part of the constraint manager, which did make some sense.

bq. At the very least, it should take the container source tags (from the 
SchedulingRequest), the target SchedulerNode, the TagsManager and the 
ConstraintsManager - and return true if it can assign the source tag to the 
Scheduler node without violating any constraints.

ApplicationId is also needed, since we query the tags based on it, but I agree 
about the rest - I would also pass the SchedulingRequest object instead of the 
tags, as it contains more information.
AlgorithmContext (described in your previous comment) could be helpful here as 
well, since we would not have to pass the Tags and Constraints managers 
explicitly.



was (Author: pgaref):
bq. Based on discussions in YARN-7612, Lets move the canAssign method into a 
separate utility class and handle that in YARN-7681.

It should definitely be in a separate class, but I am not sure it should be a 
utility one. It is part of constraint satisfaction, so it looks quite 
fundamental to me - in a previous draft patch, for example, I had the method as 
part of the constraint manager, which did make some sense.

bq. At the very least, it should take the container source tags (from the 
SchedulingRequest), the target SchedulerNode, the TagsManager and the 
ConstraintsManager - and return true if it can assign the source tag to the 
Scheduler node without violating any constraints.

ApplicationId is also needed, since we query the tags based on it, but I agree 
about the rest - I would also pass the SchedulingRequest object instead of the 
tags, as it contains more information.
AlgorithmContext (described in your previous comment) could be helpful here as 
well, since we would not have to pass the Tags and Constraints managers 
explicitly.


> Implement Planning algorithms for rich placement
> 
>
> Key: YARN-7613
> URL: https://issues.apache.org/jira/browse/YARN-7613
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Arun Suresh
>Assignee: Panagiotis Garefalakis
> Attachments: YARN-7613.wip.patch
>
>







[jira] [Commented] (YARN-7613) Implement Planning algorithms for rich placement

2017-12-25 Thread Panagiotis Garefalakis (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7613?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16303327#comment-16303327
 ] 

Panagiotis Garefalakis commented on YARN-7613:
--

bq. Based on discussions in YARN-7612, Lets move the canAssign method into a 
separate utility class and handle that in YARN-7681.

It should definitely be in a separate class, but I am not sure it should be a 
utility one. It is part of constraint satisfaction, so it looks quite 
fundamental to me - in a previous draft patch, for example, I had the method as 
part of the constraint manager, which did make some sense.

bq. At the very least, it should take the container source tags (from the 
SchedulingRequest), the target SchedulerNode, the TagsManager and the 
ConstraintsManager - and return true if it can assign the source tag to the 
Scheduler node without violating any constraints.

ApplicationId is also needed, since we query the tags based on it, but I agree 
about the rest - I would also pass the SchedulingRequest object instead of the 
tags, as it contains more information.
AlgorithmContext (described in your previous comment) could be helpful here as 
well, since we would not have to pass the Tags and Constraints managers 
explicitly.


> Implement Planning algorithms for rich placement
> 
>
> Key: YARN-7613
> URL: https://issues.apache.org/jira/browse/YARN-7613
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Arun Suresh
>Assignee: Panagiotis Garefalakis
> Attachments: YARN-7613.wip.patch
>
>







[jira] [Comment Edited] (YARN-7613) Implement Planning algorithms for rich placement

2017-12-23 Thread Panagiotis Garefalakis (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7613?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16302467#comment-16302467
 ] 

Panagiotis Garefalakis edited comment on YARN-7613 at 12/23/17 2:40 PM:


Attaching a proof of concept patch (applies cleanly on top of YARN-7612) that 
decouples the Algorithm from the constraints by introducing a canAssign method, 
which navigates through the placement constraints and simply returns whether a 
placement on a Node is feasible or not. It currently lives in the algorithm 
class (debatable).

The Planning Algorithm should only add logic to the way we iterate through 
SchedulingRequests and cluster Nodes, not to how we satisfy the constraints - 
this is the first step towards that goal.


was (Author: pgaref):
Attaching a proof of concept patch (applies cleanly on top of YARN-7612) that 
decouples the Algorithm from the constraints by introducing a canAssign method, 
which navigates through the placement constraints and simply returns whether a 
placement on a Node is feasible or not.
It currently lives in the algorithm class.

> Implement Planning algorithms for rich placement
> 
>
> Key: YARN-7613
> URL: https://issues.apache.org/jira/browse/YARN-7613
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Arun Suresh
>Assignee: Panagiotis Garefalakis
> Attachments: YARN-7613.wip.patch
>
>







[jira] [Comment Edited] (YARN-7613) Implement Planning algorithms for rich placement

2017-12-23 Thread Panagiotis Garefalakis (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7613?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16302467#comment-16302467
 ] 

Panagiotis Garefalakis edited comment on YARN-7613 at 12/23/17 2:38 PM:


Attaching a proof of concept patch (applies cleanly on top of YARN-7612) that 
decouples the Algorithm from the constraints by introducing a canAssign method, 
which navigates through the placement constraints and simply returns whether a 
placement on a Node is feasible or not.
It currently lives in the algorithm class.


was (Author: pgaref):
Attaching a proof of concept patch (applies cleanly on top of YARN-7612) that 
decouples the Algorithm from the constraints by introducing a canAssign method, 
which navigates through the placement constraints and simply returns whether a 
placement on a Node is feasible or not.
It currently lives in the algorithm class, but that should be a separate JIRA.

> Implement Planning algorithms for rich placement
> 
>
> Key: YARN-7613
> URL: https://issues.apache.org/jira/browse/YARN-7613
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Arun Suresh
>Assignee: Panagiotis Garefalakis
> Attachments: YARN-7613.wip.patch
>
>







[jira] [Comment Edited] (YARN-7613) Implement Planning algorithms for rich placement

2017-12-23 Thread Panagiotis Garefalakis (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7613?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16302467#comment-16302467
 ] 

Panagiotis Garefalakis edited comment on YARN-7613 at 12/23/17 2:37 PM:


Attaching a proof of concept patch that decouples the Algorithm from the 
constraints by introducing a canAssign method, which navigates through the 
placement constraints and simply returns whether a placement on a Node is 
feasible or not.
It currently lives in the algorithm class, but that should be a separate JIRA.


was (Author: pgaref):
Attaching a proof of concept patch that decouples the Algorithm from the 
constraints by introducing a canAssign method, which navigates through the 
placement constraints and simply returns whether a placement on a Node is 
feasible or not.
It currently lives in the algorithm class.

> Implement Planning algorithms for rich placement
> 
>
> Key: YARN-7613
> URL: https://issues.apache.org/jira/browse/YARN-7613
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Arun Suresh
>Assignee: Panagiotis Garefalakis
> Attachments: YARN-7613.wip.patch
>
>







[jira] [Comment Edited] (YARN-7613) Implement Planning algorithms for rich placement

2017-12-23 Thread Panagiotis Garefalakis (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7613?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16302467#comment-16302467
 ] 

Panagiotis Garefalakis edited comment on YARN-7613 at 12/23/17 2:37 PM:


Attaching a proof of concept patch (applies cleanly on top of YARN-7612) that 
decouples the Algorithm from the constraints by introducing a canAssign method, 
which navigates through the placement constraints and simply returns whether a 
placement on a Node is feasible or not.
It currently lives in the algorithm class, but that should be a separate JIRA.


was (Author: pgaref):
Attaching a proof of concept patch that decouples the Algorithm from the 
constraints by introducing a canAssign method, which navigates through the 
placement constraints and simply returns whether a placement on a Node is 
feasible or not.
It currently lives in the algorithm class, but that should be a separate JIRA.

> Implement Planning algorithms for rich placement
> 
>
> Key: YARN-7613
> URL: https://issues.apache.org/jira/browse/YARN-7613
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Arun Suresh
>Assignee: Panagiotis Garefalakis
> Attachments: YARN-7613.wip.patch
>
>







[jira] [Updated] (YARN-7613) Implement Planning algorithms for rich placement

2017-12-23 Thread Panagiotis Garefalakis (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-7613?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Panagiotis Garefalakis updated YARN-7613:
-
Attachment: YARN-7613.wip.patch

Attaching a proof of concept patch that decouples the Algorithm from the 
constraints by introducing a canAssign method, which navigates through the 
placement constraints and simply returns whether a placement on a Node is 
feasible or not.
It currently lives in the algorithm class.

> Implement Planning algorithms for rich placement
> 
>
> Key: YARN-7613
> URL: https://issues.apache.org/jira/browse/YARN-7613
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Arun Suresh
>Assignee: Panagiotis Garefalakis
> Attachments: YARN-7613.wip.patch
>
>







[jira] [Comment Edited] (YARN-6597) Store and update allocation tags in the Placement Constraint Manager

2017-12-22 Thread Panagiotis Garefalakis (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6597?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16301574#comment-16301574
 ] 

Panagiotis Garefalakis edited comment on YARN-6597 at 12/22/17 3:45 PM:


[~cheersyang]
YARN-7522 introduced the AllocationTagsManager component, which stores simple 
node-to-application-container mappings.
YARN-7653 added support for node-group/rack-to-application-container mappings.

I would like to keep this JIRA open in order to efficiently manage container 
tags under all possible Container state transitions (EXPIRED, RELEASED, KILLED, 
etc.). Currently we support only the container allocation and completion states, 
just as a proof of concept.
Does that make sense?



was (Author: pgaref):
[~cheersyang]
YARN-7522 introduced the AllocationTagsManager component, which stores simple 
node-to-application-container mappings.
YARN-7653 added support for node-group/rack-to-application-container mappings.

In this JIRA I would like to efficiently manage container tags under all 
possible Container state transitions (EXPIRED, RELEASED, KILLED, etc.), as we 
currently support only the container allocation and completion states as a proof 
of concept.
Does that make sense?


> Store and update allocation tags in the Placement Constraint Manager
> 
>
> Key: YARN-6597
> URL: https://issues.apache.org/jira/browse/YARN-6597
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Konstantinos Karanasos
>Assignee: Panagiotis Garefalakis
>
> Each allocation can have a set of allocation tags associated to it.
> For example, an allocation can be marked as hbase, hbase-master, spark, etc.
> These allocation-tags are active in the cluster only while that container is 
> active (from the moment it gets allocated until the moment it finishes its 
> execution).
> This JIRA is responsible for storing and updating in the 
> {{PlacementConstraintManager}} the active allocation tags in the cluster.






[jira] [Comment Edited] (YARN-6597) Store and update allocation tags in the Placement Constraint Manager

2017-12-22 Thread Panagiotis Garefalakis (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6597?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16301574#comment-16301574
 ] 

Panagiotis Garefalakis edited comment on YARN-6597 at 12/22/17 3:42 PM:


[~cheersyang]
YARN-7522 introduced the AllocationTagsManager component, which stores simple 
node-to-application-container mappings.
YARN-7653 added support for node-group/rack-to-application-container mappings.

In this JIRA I would like to efficiently manage container tags under all 
possible Container state transitions (EXPIRED, RELEASED, KILLED, etc.), as we 
currently support only the container allocation and completion states as a proof 
of concept.
Does that make sense?



was (Author: pgaref):
[~cheersyang]
YARN-7522 (https://issues.apache.org/jira/browse/YARN-7522) introduced the 
AllocationTagsManager component, which stores simple node-to-application-container 
mappings.
YARN-7653 (https://issues.apache.org/jira/browse/YARN-7653) added support for 
node-group/rack-to-application-container mappings.

In this JIRA I would like to efficiently manage container tags under all 
possible Container state transitions (EXPIRED, RELEASED, KILLED, etc.), as we 
currently support only the container allocation and completion states as a proof 
of concept.
Does that make sense?


> Store and update allocation tags in the Placement Constraint Manager
> 
>
> Key: YARN-6597
> URL: https://issues.apache.org/jira/browse/YARN-6597
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Konstantinos Karanasos
>Assignee: Panagiotis Garefalakis
>
> Each allocation can have a set of allocation tags associated to it.
> For example, an allocation can be marked as hbase, hbase-master, spark, etc.
> These allocation-tags are active in the cluster only while that container is 
> active (from the moment it gets allocated until the moment it finishes its 
> execution).
> This JIRA is responsible for storing and updating in the 
> {{PlacementConstraintManager}} the active allocation tags in the cluster.






[jira] [Commented] (YARN-6597) Store and update allocation tags in the Placement Constraint Manager

2017-12-22 Thread Panagiotis Garefalakis (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6597?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16301574#comment-16301574
 ] 

Panagiotis Garefalakis commented on YARN-6597:
--

[~cheersyang]
YARN-7522 (https://issues.apache.org/jira/browse/YARN-7522) introduced the 
AllocationTagsManager component, which stores simple node-to-application-container 
mappings.
YARN-7653 (https://issues.apache.org/jira/browse/YARN-7653) added support for 
node-group/rack-to-application-container mappings.

In this JIRA I would like to efficiently manage container tags under all 
possible Container state transitions (EXPIRED, RELEASED, KILLED, etc.), as we 
currently support only the container allocation and completion states as a proof 
of concept.
Does that make sense?
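The lifecycle handling described above (purging tags on every terminal container transition, not only on normal completion) can be sketched as follows; the enum here merely mirrors a subset of RMContainer states for illustration and is not the actual YARN type:

```java
import java.util.EnumSet;
import java.util.Set;

/** Hypothetical sketch of the tag lifecycle discussed above: tags should be
 *  purged on every terminal container transition, not only on COMPLETED.
 *  The enum mirrors a subset of RMContainer states for illustration. */
public class TagLifecycle {
    enum ContainerState { ALLOCATED, RUNNING, COMPLETED, EXPIRED, RELEASED, KILLED }

    static final Set<ContainerState> TERMINAL =
            EnumSet.of(ContainerState.COMPLETED, ContainerState.EXPIRED,
                       ContainerState.RELEASED, ContainerState.KILLED);

    /** True if a transition into this state should remove the container's tags. */
    static boolean shouldPurgeTags(ContainerState next) {
        return TERMINAL.contains(next);
    }

    public static void main(String[] args) {
        System.out.println(shouldPurgeTags(ContainerState.KILLED));  // terminal state
        System.out.println(shouldPurgeTags(ContainerState.RUNNING)); // not terminal
    }
}
```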


> Store and update allocation tags in the Placement Constraint Manager
> 
>
> Key: YARN-6597
> URL: https://issues.apache.org/jira/browse/YARN-6597
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Konstantinos Karanasos
>Assignee: Panagiotis Garefalakis
>
> Each allocation can have a set of allocation tags associated to it.
> For example, an allocation can be marked as hbase, hbase-master, spark, etc.
> These allocation-tags are active in the cluster only while that container is 
> active (from the moment it gets allocated until the moment it finishes its 
> execution).
> This JIRA is responsible for storing and updating in the 
> {{PlacementConstraintManager}} the active allocation tags in the cluster.






[jira] [Comment Edited] (YARN-6597) Store and update allocation tags in the Placement Constraint Manager

2017-12-22 Thread Panagiotis Garefalakis (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6597?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16301574#comment-16301574
 ] 

Panagiotis Garefalakis edited comment on YARN-6597 at 12/22/17 3:42 PM:


[~cheersyang]
YARN-7522 introduced the AllocationTagsManager component, which stores simple 
node-to-application-container mappings.
YARN-7653 added support for node-group/rack-to-application-container mappings.

In this JIRA I would like to efficiently manage container tags under all 
possible Container state transitions (EXPIRED, RELEASED, KILLED, etc.), as we 
currently support only the container allocation and completion states as a proof 
of concept.
Does that make sense?



was (Author: pgaref):
[~cheersyang]
YARN-7522 introduced the AllocationTagsManager component, which stores simple 
node-to-application-container mappings.
YARN-7653 added support for node-group/rack-to-application-container mappings.

In this JIRA I would like to efficiently manage container tags under all 
possible Container state transitions (EXPIRED, RELEASED, KILLED, etc.), as we 
currently support only the container allocation and completion states as a proof 
of concept.
Does that make sense?


> Store and update allocation tags in the Placement Constraint Manager
> 
>
> Key: YARN-6597
> URL: https://issues.apache.org/jira/browse/YARN-6597
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Konstantinos Karanasos
>Assignee: Panagiotis Garefalakis
>
> Each allocation can have a set of allocation tags associated to it.
> For example, an allocation can be marked as hbase, hbase-master, spark, etc.
> These allocation-tags are active in the cluster only while that container is 
> active (from the moment it gets allocated until the moment it finishes its 
> execution).
> This JIRA is responsible for storing and updating in the 
> {{PlacementConstraintManager}} the active allocation tags in the cluster.






[jira] [Comment Edited] (YARN-7653) Node group support for AllocationTagsManager

2017-12-22 Thread Panagiotis Garefalakis (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7653?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16301091#comment-16301091
 ] 

Panagiotis Garefalakis edited comment on YARN-7653 at 12/22/17 8:15 AM:


Thanks for the comments, [~asuresh]!
Regarding the NPE - the issue is addressed in the latest versions of the patch.
bq. What if a node goes down ?
In that case, I believe we have to follow the lifecycle of the affected 
containers and purge the tags as they become unavailable, perhaps using the 
relevant RMContainer states: COMPLETED, EXPIRED, RELEASED, KILLED.
bq. Would we ever need a tag -> nodes mapping ?
It is a valid point - the main reason for the extra mapping is to avoid 
iterating through all the applicationIDs (as keys) to return the aggregated 
counts. Even in the algorithm implementation we would iterate through Nodes, not 
ApplicationIDs - so we would have to do extra iterations to retrieve, e.g., a 
global count of the tag "mapreduce" across all applications.

I agree that periodic cleaning could be part of another JIRA.


was (Author: pgaref):
Thanks for the comments, [~asuresh]!
Regarding the NPE - the issue is addressed in the latest versions of the patch.
bq. What if a node goes down ?
In that case, I believe we have to follow the lifecycle of the affected 
containers and purge the tags as they become unavailable, perhaps using the 
relevant RMContainer states: COMPLETED, EXPIRED, RELEASED, KILLED.
bq. Would we ever need a tag -> nodes mapping ?
It is a valid point - the main reason for the extra mapping is to avoid 
iterating through all the applicationIDs (as keys) to return the aggregated 
counts. Even in the algorithm implementation we would iterate through Nodes, not 
ApplicationIDs - so we would have to do extra iterations to retrieve, for 
example, a global count of the tag "mapreduce" across all applications.

I agree that periodic cleaning could be part of another JIRA.

> Node group support for AllocationTagsManager
> 
>
> Key: YARN-7653
> URL: https://issues.apache.org/jira/browse/YARN-7653
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Panagiotis Garefalakis
>Assignee: Panagiotis Garefalakis
> Attachments: YARN-7653-YARN-6592.001.patch, 
> YARN-7653-YARN-6592.002.patch, YARN-7653-YARN-6592.003.patch
>
>
> AllocationTagsManager currently supports node and cluster-wide tag 
> cardinality retrieval.
> If we want to support arbitrary node groups/scopes for our placement 
> constraints, the TagsManager should be extended to provide such functionality.
> As a first step we need to support RACK-scope cardinality retrieval (as 
> defined in our API),
> e.g. how many "spark" containers are currently running on "RACK-1".
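As a hedged illustration of the rack-scope query in the description (all names below are made up; the real code works against SchedulerNode and AllocationTagsManager), rack cardinality can be viewed as a sum of per-node tag counts over a node-to-rack mapping:

```java
import java.util.Map;

/** Hypothetical sketch of RACK-scope cardinality retrieval: per-node tag
 *  counts are aggregated through a node-to-rack mapping, so
 *  "how many 'spark' containers on RACK-1" is a sum over that rack's nodes. */
public class RackCardinality {

    static long rackCardinality(String rack, String tag,
                                Map<String, String> rackOfNode,
                                Map<String, Map<String, Long>> tagsByNode) {
        long total = 0L;
        for (Map.Entry<String, Map<String, Long>> e : tagsByNode.entrySet()) {
            if (rack.equals(rackOfNode.get(e.getKey()))) {
                total += e.getValue().getOrDefault(tag, 0L);
            }
        }
        return total;
    }

    /** Demo: 2 + 1 "spark" containers on RACK-1's nodes, 4 on another rack. */
    static long demo() {
        Map<String, String> racks = Map.of("n1", "RACK-1", "n2", "RACK-1", "n3", "RACK-2");
        Map<String, Map<String, Long>> tags = Map.of(
                "n1", Map.of("spark", 2L),
                "n2", Map.of("spark", 1L),
                "n3", Map.of("spark", 4L));
        return rackCardinality("RACK-1", "spark", racks, tags);
    }

    public static void main(String[] args) {
        System.out.println(demo());
    }
}
```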






[jira] [Commented] (YARN-7653) Node group support for AllocationTagsManager

2017-12-22 Thread Panagiotis Garefalakis (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7653?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16301091#comment-16301091
 ] 

Panagiotis Garefalakis commented on YARN-7653:
--

Thanks for the comments, [~asuresh]!
Regarding the NPE - the issue is addressed in the latest versions of the patch.
bq. What if a node goes down ?
In that case, I believe we have to follow the lifecycle of the affected 
containers and purge the tags as they become unavailable, perhaps using the 
relevant RMContainer states: COMPLETED, EXPIRED, RELEASED, KILLED.
bq. Would we ever need a tag -> nodes mapping ?
It is a valid point - the main reason for the extra mapping is to avoid 
iterating through all the applicationIDs (as keys) to return the aggregated 
counts. Even in the algorithm implementation we would iterate through Nodes, not 
ApplicationIDs - so we would have to do extra iterations to retrieve, for 
example, a global count of the tag "mapreduce" across all applications.

I agree that periodic cleaning could be part of another JIRA.
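The reasoning above (keeping an extra mapping so a cluster-wide tag count needs no iteration over application keys) can be sketched, with hypothetical names, as:

```java
import java.util.HashMap;
import java.util.Map;

/** Hypothetical sketch of the extra tag-to-count index discussed above: a
 *  global cardinality map maintained alongside the per-application mapping,
 *  so a cluster-wide count for a tag is a single lookup instead of an
 *  iteration over all applicationIDs. */
public class GlobalTagIndex {
    private final Map<String, Long> globalTagCount = new HashMap<>();

    void addContainer(String tag) {
        globalTagCount.merge(tag, 1L, Long::sum);
    }

    void removeContainer(String tag) {
        // remove the entry entirely once the count drops to zero
        globalTagCount.computeIfPresent(tag, (t, c) -> c > 1 ? c - 1 : null);
    }

    long cardinality(String tag) {
        return globalTagCount.getOrDefault(tag, 0L);
    }

    /** Demo: two "mapreduce" containers allocated, one completed. */
    static long demo() {
        GlobalTagIndex idx = new GlobalTagIndex();
        idx.addContainer("mapreduce");
        idx.addContainer("mapreduce");
        idx.removeContainer("mapreduce");
        return idx.cardinality("mapreduce");
    }

    public static void main(String[] args) {
        System.out.println(demo());
    }
}
```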

> Node group support for AllocationTagsManager
> 
>
> Key: YARN-7653
> URL: https://issues.apache.org/jira/browse/YARN-7653
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Panagiotis Garefalakis
>Assignee: Panagiotis Garefalakis
> Attachments: YARN-7653-YARN-6592.001.patch, 
> YARN-7653-YARN-6592.002.patch, YARN-7653-YARN-6592.003.patch
>
>
> AllocationTagsManager currently supports node and cluster-wide tag 
> cardinality retrieval.
> If we want to support arbitrary node groups/scopes for our placement 
> constraints, the TagsManager should be extended to provide such functionality.
> As a first step we need to support RACK-scope cardinality retrieval (as 
> defined in our API),
> e.g. how many "spark" containers are currently running on "RACK-1".






[jira] [Updated] (YARN-7653) Node group support for AllocationTagsManager

2017-12-21 Thread Panagiotis Garefalakis (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-7653?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Panagiotis Garefalakis updated YARN-7653:
-
Attachment: YARN-7653-YARN-6592.003.patch

> Node group support for AllocationTagsManager
> 
>
> Key: YARN-7653
> URL: https://issues.apache.org/jira/browse/YARN-7653
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Panagiotis Garefalakis
>Assignee: Panagiotis Garefalakis
> Attachments: YARN-7653-YARN-6592.001.patch, 
> YARN-7653-YARN-6592.002.patch, YARN-7653-YARN-6592.003.patch
>
>
> AllocationTagsManager currently supports node and cluster-wide tag 
> cardinality retrieval.
> If we want to support arbitrary node-groups/scopes for our placement 
> constraints TagsManager should be extended to provide such functionality.
> As a first step we need to support RACK scope cardinality retrieval (as 
> defined in our API).
> i.e. how many "spark" containers are currently running on "RACK-1"






[jira] [Updated] (YARN-7653) Node group support for AllocationTagsManager

2017-12-21 Thread Panagiotis Garefalakis (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-7653?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Panagiotis Garefalakis updated YARN-7653:
-
Attachment: YARN-7653-YARN-6592.002.patch

> Node group support for AllocationTagsManager
> 
>
> Key: YARN-7653
> URL: https://issues.apache.org/jira/browse/YARN-7653
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Panagiotis Garefalakis
>Assignee: Panagiotis Garefalakis
> Attachments: YARN-7653-YARN-6592.001.patch, 
> YARN-7653-YARN-6592.002.patch
>
>
> AllocationTagsManager currently supports node and cluster-wide tag 
> cardinality retrieval.
> If we want to support arbitrary node-groups/scopes for our placement 
> constraints TagsManager should be extended to provide such functionality.
> As a first step we need to support RACK scope cardinality retrieval (as 
> defined in our API).
> i.e. how many "spark" containers are currently running on "RACK-1"






[jira] [Updated] (YARN-7653) Node group support for AllocationTagsManager

2017-12-21 Thread Panagiotis Garefalakis (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-7653?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Panagiotis Garefalakis updated YARN-7653:
-
Attachment: YARN-7653-YARN-6592.001.patch

Extending AllocationTagsManager to support the RACK scope needed by our 
placement algorithm.
To do so, the internal class NodeToCountedTags is now a generic mapping 
between a type T and a map of tag counts.
The manager now holds both node and rack mappings to per-application and 
global counters.
Also added the relevant test cases.

[~asuresh] [~kkaranasos] Please take a look.
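The generic mapping described above can be sketched as follows - a minimal, 
illustrative structure (class and method names are hypothetical, not the 
actual patch code), parameterized on the key type so the same implementation 
can count tags per node and per rack:

```java
import java.util.HashMap;
import java.util.Map;

// Hypothetical sketch of a generic counted-tags mapping: T is the scope
// key (e.g. a NodeId for node scope, a rack name for rack scope), and
// each key maps to per-tag counts.
public class CountedTags<T> {
    private final Map<T, Map<String, Long>> counts = new HashMap<>();

    public void add(T key, String tag) {
        counts.computeIfAbsent(key, k -> new HashMap<>())
              .merge(tag, 1L, Long::sum);
    }

    // cardinality of a tag within one scope instance (one node, one rack)
    public long cardinality(T key, String tag) {
        return counts.getOrDefault(key, Map.of()).getOrDefault(tag, 0L);
    }
}
```

With this shape, the manager can simply hold two instances - e.g. a 
`CountedTags<NodeId>` for node scope and a `CountedTags<String>` keyed by 
rack name for rack scope.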

> Node group support for AllocationTagsManager
> 
>
> Key: YARN-7653
> URL: https://issues.apache.org/jira/browse/YARN-7653
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Panagiotis Garefalakis
>Assignee: Panagiotis Garefalakis
> Attachments: YARN-7653-YARN-6592.001.patch
>
>
> AllocationTagsManager currently supports node and cluster-wide tag 
> cardinality retrieval.
> If we want to support arbitrary node-groups/scopes for our placement 
> constraints TagsManager should be extended to provide such functionality.
> As a first step we need to support RACK scope cardinality retrieval (as 
> defined in our API).
> i.e. how many "spark" containers are currently running on "RACK-1"






[jira] [Assigned] (YARN-7653) Node group support for AllocationTagsManager

2017-12-21 Thread Panagiotis Garefalakis (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-7653?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Panagiotis Garefalakis reassigned YARN-7653:


Assignee: Panagiotis Garefalakis

> Node group support for AllocationTagsManager
> 
>
> Key: YARN-7653
> URL: https://issues.apache.org/jira/browse/YARN-7653
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Panagiotis Garefalakis
>Assignee: Panagiotis Garefalakis
>
> AllocationTagsManager currently supports node and cluster-wide tag 
> cardinality retrieval.
> If we want to support arbitrary node-groups/scopes for our placement 
> constraints TagsManager should be extended to provide such functionality.
> As a first step we need to support RACK scope cardinality retrieval (as 
> defined in our API).
> i.e. how many "spark" containers are currently running on "RACK-1"






[jira] [Comment Edited] (YARN-7522) Introduce AllocationTagsManager to associate allocation tags to nodes

2017-12-13 Thread Panagiotis Garefalakis (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7522?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16290017#comment-16290017
 ] 

Panagiotis Garefalakis edited comment on YARN-7522 at 12/13/17 10:10 PM:
-

Agreed [~kkaranasos]. New JIRA filed under 
[YARN-7653|https://issues.apache.org/jira/browse/YARN-7653]


was (Author: pgaref):
Agreed [~kkaranasos]. New JIRA filed under [#YARN-7653]

> Introduce AllocationTagsManager to associate allocation tags to nodes
> -
>
> Key: YARN-7522
> URL: https://issues.apache.org/jira/browse/YARN-7522
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Wangda Tan
>Assignee: Wangda Tan
> Fix For: YARN-6592
>
> Attachments: YARN-7522.YARN-6592.002.patch, 
> YARN-7522.YARN-6592.003.patch, YARN-7522.YARN-6592.004.patch, 
> YARN-7522.YARN-6592.005.patch, YARN-7522.YARN-6592.wip-001.patch
>
>
> This is different from YARN-6596, YARN-6596 is targeted to add constraint 
> manager to store intra/inter application placement constraints. This JIRA is 
> targeted to support storing maps between container-tags/applications and 
> nodes. This will be required by affinity/anti-affinity implementation and 
> cardinality.






[jira] [Comment Edited] (YARN-7522) Introduce AllocationTagsManager to associate allocation tags to nodes

2017-12-13 Thread Panagiotis Garefalakis (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7522?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16290017#comment-16290017
 ] 

Panagiotis Garefalakis edited comment on YARN-7522 at 12/13/17 10:10 PM:
-

Agreed [~kkaranasos]. New JIRA filed under [#YARN-7653]


was (Author: pgaref):
Agreed [~kkaranasos]. New JIRA filed under 
https://issues.apache.org/jira/browse/YARN-7653

> Introduce AllocationTagsManager to associate allocation tags to nodes
> -
>
> Key: YARN-7522
> URL: https://issues.apache.org/jira/browse/YARN-7522
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Wangda Tan
>Assignee: Wangda Tan
> Fix For: YARN-6592
>
> Attachments: YARN-7522.YARN-6592.002.patch, 
> YARN-7522.YARN-6592.003.patch, YARN-7522.YARN-6592.004.patch, 
> YARN-7522.YARN-6592.005.patch, YARN-7522.YARN-6592.wip-001.patch
>
>
> This is different from YARN-6596, YARN-6596 is targeted to add constraint 
> manager to store intra/inter application placement constraints. This JIRA is 
> targeted to support storing maps between container-tags/applications and 
> nodes. This will be required by affinity/anti-affinity implementation and 
> cardinality.






[jira] [Commented] (YARN-7522) Introduce AllocationTagsManager to associate allocation tags to nodes

2017-12-13 Thread Panagiotis Garefalakis (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7522?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16290017#comment-16290017
 ] 

Panagiotis Garefalakis commented on YARN-7522:
--

Agreed [~kkaranasos]. New JIRA filed under 
https://issues.apache.org/jira/browse/YARN-7653

> Introduce AllocationTagsManager to associate allocation tags to nodes
> -
>
> Key: YARN-7522
> URL: https://issues.apache.org/jira/browse/YARN-7522
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Wangda Tan
>Assignee: Wangda Tan
> Fix For: YARN-6592
>
> Attachments: YARN-7522.YARN-6592.002.patch, 
> YARN-7522.YARN-6592.003.patch, YARN-7522.YARN-6592.004.patch, 
> YARN-7522.YARN-6592.005.patch, YARN-7522.YARN-6592.wip-001.patch
>
>
> This is different from YARN-6596, YARN-6596 is targeted to add constraint 
> manager to store intra/inter application placement constraints. This JIRA is 
> targeted to support storing maps between container-tags/applications and 
> nodes. This will be required by affinity/anti-affinity implementation and 
> cardinality.






[jira] [Created] (YARN-7653) Node group support for AllocationTagsManager

2017-12-13 Thread Panagiotis Garefalakis (JIRA)
Panagiotis Garefalakis created YARN-7653:


 Summary: Node group support for AllocationTagsManager
 Key: YARN-7653
 URL: https://issues.apache.org/jira/browse/YARN-7653
 Project: Hadoop YARN
  Issue Type: Sub-task
Reporter: Panagiotis Garefalakis


AllocationTagsManager currently supports node and cluster-wide tag cardinality 
retrieval.
If we want to support arbitrary node-groups/scopes for our placement 
constraints, the TagsManager should be extended to provide such functionality.
As a first step we need to support RACK-scope cardinality retrieval (as defined 
in our API),
e.g. how many "spark" containers are currently running on "RACK-1".
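As an illustration of the rack-scope query above, here is a hedged sketch (all 
class and method names are hypothetical - the real TagsManager API may differ) 
that resolves nodes to racks and counts tags per rack:

```java
import java.util.HashMap;
import java.util.Map;

// Hypothetical sketch: containers are added per node, but counts are
// aggregated per rack via a node -> rack mapping, so rack-scope
// cardinality ("how many spark containers on RACK-1") is a direct lookup.
public class RackScopeExample {
    private final Map<String, String> nodeToRack = new HashMap<>();
    private final Map<String, Map<String, Long>> rackTagCounts =
        new HashMap<>();

    public void registerNode(String node, String rack) {
        nodeToRack.put(node, rack);
    }

    public void addContainer(String node, String tag) {
        String rack = nodeToRack.get(node);
        rackTagCounts.computeIfAbsent(rack, k -> new HashMap<>())
                     .merge(tag, 1L, Long::sum);
    }

    // e.g. how many "spark" containers are currently running on "RACK-1"
    public long getRackCardinality(String rack, String tag) {
        return rackTagCounts.getOrDefault(rack, new HashMap<>())
                            .getOrDefault(tag, 0L);
    }
}
```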







[jira] [Comment Edited] (YARN-7522) Introduce AllocationTagsManager to associate allocation tags to nodes

2017-12-13 Thread Panagiotis Garefalakis (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7522?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16289939#comment-16289939
 ] 

Panagiotis Garefalakis edited comment on YARN-7522 at 12/13/17 9:33 PM:


I know this patch is already committed, but one thing I can see missing from 
the TagsManager is proper support for different constraint scopes.
For example, while it is fairly easy to retrieve tag cardinalities per node or 
cluster-wide, there is no straightforward way to do this per rack - I believe 
this should be part of the TagsManager as well.

Any thoughts [~asuresh] [~wangda] [~kkaranasos]?



was (Author: pgaref):
I know this patch is already committed, but one thing I can see missing from 
the TagsManager is proper support for different constraint scopes.
For example, while it is fairly easy to retrieve tag cardinalities per node or 
cluster-wide, there is no straightforward way to do this per rack - I believe 
this should be part of the TagsManager as well.

Any thoughts [~asuresh] [~wangda][~kkaranasos]?


> Introduce AllocationTagsManager to associate allocation tags to nodes
> -
>
> Key: YARN-7522
> URL: https://issues.apache.org/jira/browse/YARN-7522
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Wangda Tan
>Assignee: Wangda Tan
> Fix For: YARN-6592
>
> Attachments: YARN-7522.YARN-6592.002.patch, 
> YARN-7522.YARN-6592.003.patch, YARN-7522.YARN-6592.004.patch, 
> YARN-7522.YARN-6592.005.patch, YARN-7522.YARN-6592.wip-001.patch
>
>
> This is different from YARN-6596, YARN-6596 is targeted to add constraint 
> manager to store intra/inter application placement constraints. This JIRA is 
> targeted to support storing maps between container-tags/applications and 
> nodes. This will be required by affinity/anti-affinity implementation and 
> cardinality.






[jira] [Comment Edited] (YARN-7522) Introduce AllocationTagsManager to associate allocation tags to nodes

2017-12-13 Thread Panagiotis Garefalakis (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7522?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16289939#comment-16289939
 ] 

Panagiotis Garefalakis edited comment on YARN-7522 at 12/13/17 9:33 PM:


I know this patch is already committed, but one thing I can see missing from 
the TagsManager is proper support for different constraint scopes.
For example, while it is fairly easy to retrieve tag cardinalities per node or 
cluster-wide, there is no straightforward way to do this per rack - I believe 
this should be part of the TagsManager as well.

Any thoughts [~asuresh] [~wangda][~kkaranasos]?



was (Author: pgaref):
I know this patch is already committed, but one thing I can see missing from 
the TagsManager is proper support for different constraint scopes.
For example, while it is fairly easy to retrieve tag cardinalities per node or 
cluster-wide, there is no straightforward way to do this per rack - I believe 
this should be part of the TagsManager as well.

Any thoughts [~asuresh] [~wangda]?


> Introduce AllocationTagsManager to associate allocation tags to nodes
> -
>
> Key: YARN-7522
> URL: https://issues.apache.org/jira/browse/YARN-7522
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Wangda Tan
>Assignee: Wangda Tan
> Fix For: YARN-6592
>
> Attachments: YARN-7522.YARN-6592.002.patch, 
> YARN-7522.YARN-6592.003.patch, YARN-7522.YARN-6592.004.patch, 
> YARN-7522.YARN-6592.005.patch, YARN-7522.YARN-6592.wip-001.patch
>
>
> This is different from YARN-6596, YARN-6596 is targeted to add constraint 
> manager to store intra/inter application placement constraints. This JIRA is 
> targeted to support storing maps between container-tags/applications and 
> nodes. This will be required by affinity/anti-affinity implementation and 
> cardinality.






[jira] [Comment Edited] (YARN-7522) Introduce AllocationTagsManager to associate allocation tags to nodes

2017-12-13 Thread Panagiotis Garefalakis (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7522?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16289939#comment-16289939
 ] 

Panagiotis Garefalakis edited comment on YARN-7522 at 12/13/17 9:32 PM:


I know this patch is already committed, but one thing I can see missing from 
the TagsManager is proper support for different constraint scopes.
For example, while it is fairly easy to retrieve tag cardinalities per node or 
cluster-wide, there is no straightforward way to do this per rack - I believe 
this should be part of the TagsManager as well.

Any thoughts [~asuresh] [~wangda]?



was (Author: pgaref):
I know this patch is already committed, but one thing I can see missing from 
the TagsManager is proper support for different constraint scopes.
For example, while it is fairly easy to retrieve tag cardinalities per node or 
cluster-wide, there is no straightforward way to do this per rack - I think 
this should be part of the TagsManager as well.

Any thoughts [~asuresh] [~wangda]?


> Introduce AllocationTagsManager to associate allocation tags to nodes
> -
>
> Key: YARN-7522
> URL: https://issues.apache.org/jira/browse/YARN-7522
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Wangda Tan
>Assignee: Wangda Tan
> Fix For: YARN-6592
>
> Attachments: YARN-7522.YARN-6592.002.patch, 
> YARN-7522.YARN-6592.003.patch, YARN-7522.YARN-6592.004.patch, 
> YARN-7522.YARN-6592.005.patch, YARN-7522.YARN-6592.wip-001.patch
>
>
> This is different from YARN-6596, YARN-6596 is targeted to add constraint 
> manager to store intra/inter application placement constraints. This JIRA is 
> targeted to support storing maps between container-tags/applications and 
> nodes. This will be required by affinity/anti-affinity implementation and 
> cardinality.






[jira] [Commented] (YARN-7522) Introduce AllocationTagsManager to associate allocation tags to nodes

2017-12-13 Thread Panagiotis Garefalakis (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7522?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16289939#comment-16289939
 ] 

Panagiotis Garefalakis commented on YARN-7522:
--

I know this patch is already committed, but one thing I can see missing from 
the TagsManager is proper support for different constraint scopes.
For example, while it is fairly easy to retrieve tag cardinalities per node or 
cluster-wide, there is no straightforward way to do this per rack - I think 
this should be part of the TagsManager as well.

Any thoughts [~asuresh] [~wangda]?


> Introduce AllocationTagsManager to associate allocation tags to nodes
> -
>
> Key: YARN-7522
> URL: https://issues.apache.org/jira/browse/YARN-7522
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Wangda Tan
>Assignee: Wangda Tan
> Fix For: YARN-6592
>
> Attachments: YARN-7522.YARN-6592.002.patch, 
> YARN-7522.YARN-6592.003.patch, YARN-7522.YARN-6592.004.patch, 
> YARN-7522.YARN-6592.005.patch, YARN-7522.YARN-6592.wip-001.patch
>
>
> This is different from YARN-6596, YARN-6596 is targeted to add constraint 
> manager to store intra/inter application placement constraints. This JIRA is 
> targeted to support storing maps between container-tags/applications and 
> nodes. This will be required by affinity/anti-affinity implementation and 
> cardinality.






[jira] [Commented] (YARN-7612) Add Placement Processor and planner framework

2017-12-12 Thread Panagiotis Garefalakis (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7612?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16287710#comment-16287710
 ] 

Panagiotis Garefalakis commented on YARN-7612:
--

Thanks [~asuresh] for the patch and [~leftnoteasy] for the input.
Some comments from my side:
* I like the latest split of the common interfaces - though I am not sure why 
the PlacementAlgorithm abstract class is not part of the constraint.spi 
package as well?
* The RejectedReason enums look confusing to me - names like 
INVALID_PLACEMENT_REJECTION and INFEASIBLE_PLACEMENT_REJECTION would seem more 
reasonable.
* Currently the PlacementConstraintsManager interface ties constraints to 
appIDs, but we also need to support *cluster-wide* constraints - so I would 
add another, more generic setter and getter as well.
* Looking at the SamplePlacementAlgorithm implementation, one thing we are not 
currently taking into account is the tags of SchedulingRequests placed in 
previous iterations of the current batch: their containers have not been 
launched yet, so the TagsManager is not aware of their existence. One way to 
solve this would be for the algorithm implementation to keep an extra data 
structure with the placed tags; another would be to extend the TagsManager to 
keep a temporary mapping. Any thoughts on that?
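The second option in the last bullet could look roughly like this - a 
hypothetical sketch (names are illustrative, not the actual framework API) 
that layers a temporary map of tags placed earlier in the batch on top of the 
committed counts, so intra-batch constraint checks see not-yet-launched 
containers:

```java
import java.util.HashMap;
import java.util.Map;

// Hypothetical sketch: "pending" counts record placements made in earlier
// iterations of the current batch; constraint checks read the sum of
// committed and pending, and pending is cleared once containers launch.
public class PendingTagsOverlay {
    private final Map<String, Long> committed = new HashMap<>();
    private final Map<String, Long> pending = new HashMap<>();

    // a container has actually launched and its tag is committed
    public void commit(String tag) {
        committed.merge(tag, 1L, Long::sum);
    }

    // the algorithm placed a request in an earlier iteration of the batch
    public void placePending(String tag) {
        pending.merge(tag, 1L, Long::sum);
    }

    // what a constraint check should see: committed plus in-flight counts
    public long effectiveCardinality(String tag) {
        return committed.getOrDefault(tag, 0L)
             + pending.getOrDefault(tag, 0L);
    }

    // dropped once the TagsManager learns about the launched containers
    public void clearPending() {
        pending.clear();
    }
}
```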

Minor:
* Unused import of interface ResourceScheduler
* Typo (placementConstriantsManager) affecting RMContext and 
RMActiveServiceContext
* The AbstractYarnScheduler changes do not seem to apply cleanly for me


> Add Placement Processor and planner framework
> -
>
> Key: YARN-7612
> URL: https://issues.apache.org/jira/browse/YARN-7612
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Arun Suresh
>Assignee: Arun Suresh
> Attachments: YARN-7612-YARN-6592.001.patch, 
> YARN-7612-YARN-6592.002.patch, YARN-7612-YARN-6592.003.patch, 
> YARN-7612-YARN-6592.004.patch, YARN-7612-YARN-6592.005.patch, 
> YARN-7612-v2.wip.patch, YARN-7612.wip.patch
>
>
> This introduces a Placement Processor and a Planning algorithm framework to 
> handle placement constraints and scheduling requests from an app and places 
> them on nodes.
> The actual planning algorithm(s) will be handled in a YARN-7613.






[jira] [Assigned] (YARN-6942) Add examples for placement constraints usage in applications

2017-11-17 Thread Panagiotis Garefalakis (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-6942?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Panagiotis Garefalakis reassigned YARN-6942:


Assignee: Panagiotis Garefalakis

> Add examples for placement constraints usage in applications
> 
>
> Key: YARN-6942
> URL: https://issues.apache.org/jira/browse/YARN-6942
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Konstantinos Karanasos
>Assignee: Panagiotis Garefalakis
>
> This JIRA will include examples of how the new {{PlacementConstraints}} API 
> can be used by various applications.






[jira] [Comment Edited] (YARN-7448) [API] Add SchedulingRequest to the AllocateRequest

2017-11-17 Thread Panagiotis Garefalakis (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7448?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16257195#comment-16257195
 ] 

Panagiotis Garefalakis edited comment on YARN-7448 at 11/17/17 4:49 PM:


[~kkaranasos] [~asuresh] thanks for the comments. I am attaching the current 
version of the patch, based on v007, where the schedulingRequests method is 
the only setter for the schedulingRequests list.



was (Author: pgaref):
[~kkaranasos] [~asuresh] thanks for the comments. I am attaching the current 
version of the patch, based on v7, where the schedulingRequests method is 
the only setter for the schedulingRequests list.


> [API] Add SchedulingRequest to the AllocateRequest
> --
>
> Key: YARN-7448
> URL: https://issues.apache.org/jira/browse/YARN-7448
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Arun Suresh
>Assignee: Panagiotis Garefalakis
> Attachments: YARN-7448-YARN-6592.001.patch, 
> YARN-7448-YARN-6592.002.patch, YARN-7448-YARN-6592.003.patch, 
> YARN-7448-YARN-6592.004.patch, YARN-7448-YARN-6592.005.patch, 
> YARN-7448-YARN-6592.006.patch, YARN-7448-YARN-6592.007.patch, 
> YARN-7448-YARN-6592.008.patch, YARN-7448-YARN-6592.009.patch
>
>
> YARN-6594 introduces the {{SchedulingRequest}}. This JIRA tracks the 
> inclusion of the SchedulingRequest into the AllocateRequest.






[jira] [Updated] (YARN-7448) [API] Add SchedulingRequest to the AllocateRequest

2017-11-17 Thread Panagiotis Garefalakis (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-7448?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Panagiotis Garefalakis updated YARN-7448:
-
Attachment: YARN-7448-YARN-6592.009.patch

[~kkaranasos] [~asuresh] thanks for the comments. I am attaching the current 
version of the patch, based on v7, where the schedulingRequests method is 
the only setter for the schedulingRequests list.


> [API] Add SchedulingRequest to the AllocateRequest
> --
>
> Key: YARN-7448
> URL: https://issues.apache.org/jira/browse/YARN-7448
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Arun Suresh
>Assignee: Panagiotis Garefalakis
> Attachments: YARN-7448-YARN-6592.001.patch, 
> YARN-7448-YARN-6592.002.patch, YARN-7448-YARN-6592.003.patch, 
> YARN-7448-YARN-6592.004.patch, YARN-7448-YARN-6592.005.patch, 
> YARN-7448-YARN-6592.006.patch, YARN-7448-YARN-6592.007.patch, 
> YARN-7448-YARN-6592.008.patch, YARN-7448-YARN-6592.009.patch
>
>
> YARN-6594 introduces the {{SchedulingRequest}}. This JIRA tracks the 
> inclusion of the SchedulingRequest into the AllocateRequest.






[jira] [Updated] (YARN-7448) [API] Add SchedulingRequest to the AllocateRequest

2017-11-17 Thread Panagiotis Garefalakis (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-7448?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Panagiotis Garefalakis updated YARN-7448:
-
Attachment: YARN-7448-YARN-6592.008.patch

> [API] Add SchedulingRequest to the AllocateRequest
> --
>
> Key: YARN-7448
> URL: https://issues.apache.org/jira/browse/YARN-7448
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Arun Suresh
>Assignee: Panagiotis Garefalakis
> Attachments: YARN-7448-YARN-6592.001.patch, 
> YARN-7448-YARN-6592.002.patch, YARN-7448-YARN-6592.003.patch, 
> YARN-7448-YARN-6592.004.patch, YARN-7448-YARN-6592.005.patch, 
> YARN-7448-YARN-6592.006.patch, YARN-7448-YARN-6592.007.patch, 
> YARN-7448-YARN-6592.008.patch
>
>
> YARN-6594 introduces the {{SchedulingRequest}}. This JIRA tracks the 
> inclusion of the SchedulingRequest into the AllocateRequest.






[jira] [Updated] (YARN-7448) [API] Add SchedulingRequest to the AllocateRequest

2017-11-16 Thread Panagiotis Garefalakis (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-7448?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Panagiotis Garefalakis updated YARN-7448:
-
Attachment: YARN-7448-YARN-6592.007.patch

> [API] Add SchedulingRequest to the AllocateRequest
> --
>
> Key: YARN-7448
> URL: https://issues.apache.org/jira/browse/YARN-7448
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Arun Suresh
>Assignee: Panagiotis Garefalakis
> Attachments: YARN-7448-YARN-6592.001.patch, 
> YARN-7448-YARN-6592.002.patch, YARN-7448-YARN-6592.003.patch, 
> YARN-7448-YARN-6592.004.patch, YARN-7448-YARN-6592.005.patch, 
> YARN-7448-YARN-6592.006.patch, YARN-7448-YARN-6592.007.patch
>
>
> YARN-6594 introduces the {{SchedulingRequest}}. This JIRA tracks the 
> inclusion of the SchedulingRequest into the AllocateRequest.






[jira] [Updated] (YARN-7448) [API] Add SchedulingRequest to the AllocateRequest

2017-11-16 Thread Panagiotis Garefalakis (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-7448?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Panagiotis Garefalakis updated YARN-7448:
-
Attachment: YARN-7448-YARN-6592.006.patch

> [API] Add SchedulingRequest to the AllocateRequest
> --
>
> Key: YARN-7448
> URL: https://issues.apache.org/jira/browse/YARN-7448
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Arun Suresh
>Assignee: Panagiotis Garefalakis
> Attachments: YARN-7448-YARN-6592.001.patch, 
> YARN-7448-YARN-6592.002.patch, YARN-7448-YARN-6592.003.patch, 
> YARN-7448-YARN-6592.004.patch, YARN-7448-YARN-6592.005.patch, 
> YARN-7448-YARN-6592.006.patch
>
>
> YARN-6594 introduces the {{SchedulingRequest}}. This JIRA tracks the 
> inclusion of the SchedulingRequest into the AllocateRequest.






[jira] [Updated] (YARN-7448) [API] Add SchedulingRequest to the AllocateRequest

2017-11-16 Thread Panagiotis Garefalakis (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-7448?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Panagiotis Garefalakis updated YARN-7448:
-
Attachment: YARN-7448-YARN-6592.005.patch

> [API] Add SchedulingRequest to the AllocateRequest
> --
>
> Key: YARN-7448
> URL: https://issues.apache.org/jira/browse/YARN-7448
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Arun Suresh
>Assignee: Panagiotis Garefalakis
> Attachments: YARN-7448-YARN-6592.001.patch, 
> YARN-7448-YARN-6592.002.patch, YARN-7448-YARN-6592.003.patch, 
> YARN-7448-YARN-6592.004.patch, YARN-7448-YARN-6592.005.patch
>
>
> YARN-6594 introduces the {{SchedulingRequest}}. This JIRA tracks the 
> inclusion of the SchedulingRequest into the AllocateRequest.






[jira] [Updated] (YARN-7448) [API] Add SchedulingRequest to the AllocateRequest

2017-11-16 Thread Panagiotis Garefalakis (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-7448?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Panagiotis Garefalakis updated YARN-7448:
-
Attachment: YARN-7448-YARN-6592.004.patch

> [API] Add SchedulingRequest to the AllocateRequest
> --
>
> Key: YARN-7448
> URL: https://issues.apache.org/jira/browse/YARN-7448
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Arun Suresh
>Assignee: Panagiotis Garefalakis
> Attachments: YARN-7448-YARN-6592.001.patch, 
> YARN-7448-YARN-6592.002.patch, YARN-7448-YARN-6592.003.patch, 
> YARN-7448-YARN-6592.004.patch
>
>
> YARN-6594 introduces the {{SchedulingRequest}}. This JIRA tracks the 
> inclusion of the SchedulingRequest into the AllocateRequest.






[jira] [Updated] (YARN-7448) [API] Add SchedulingRequest to the AllocateRequest

2017-11-16 Thread Panagiotis Garefalakis (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-7448?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Panagiotis Garefalakis updated YARN-7448:
-
Attachment: (was: YARN-7448-YARN-6592.004.patch)

> [API] Add SchedulingRequest to the AllocateRequest
> --
>
> Key: YARN-7448
> URL: https://issues.apache.org/jira/browse/YARN-7448
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Arun Suresh
>Assignee: Panagiotis Garefalakis
> Attachments: YARN-7448-YARN-6592.001.patch, 
> YARN-7448-YARN-6592.002.patch, YARN-7448-YARN-6592.003.patch
>
>
> YARN-6594 introduces the {{SchedulingRequest}}. This JIRA tracks the 
> inclusion of the SchedulingRequest into the AllocateRequest.






[jira] [Updated] (YARN-7448) [API] Add SchedulingRequest to the AllocateRequest

2017-11-16 Thread Panagiotis Garefalakis (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-7448?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Panagiotis Garefalakis updated YARN-7448:
-
Attachment: (was: YARN-7448-YARN-6592.005.patch)

> [API] Add SchedulingRequest to the AllocateRequest
> --
>
> Key: YARN-7448
> URL: https://issues.apache.org/jira/browse/YARN-7448
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Arun Suresh
>Assignee: Panagiotis Garefalakis
> Attachments: YARN-7448-YARN-6592.001.patch, 
> YARN-7448-YARN-6592.002.patch, YARN-7448-YARN-6592.003.patch
>
>
> YARN-6594 introduces the {{SchedulingRequest}}. This JIRA tracks the 
> inclusion of the SchedulingRequest into the AllocateRequest.






[jira] [Updated] (YARN-7448) [API] Add SchedulingRequest to the AllocateRequest

2017-11-16 Thread Panagiotis Garefalakis (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-7448?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Panagiotis Garefalakis updated YARN-7448:
-
Attachment: YARN-7448-YARN-6592.005.patch

> [API] Add SchedulingRequest to the AllocateRequest
> --
>
> Key: YARN-7448
> URL: https://issues.apache.org/jira/browse/YARN-7448
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Arun Suresh
>Assignee: Panagiotis Garefalakis
> Attachments: YARN-7448-YARN-6592.001.patch, 
> YARN-7448-YARN-6592.002.patch, YARN-7448-YARN-6592.003.patch, 
> YARN-7448-YARN-6592.004.patch, YARN-7448-YARN-6592.005.patch
>
>
> YARN-6594 introduces the {{SchedulingRequest}}. This JIRA tracks the 
> inclusion of the SchedulingRequest into the AllocateRequest.






[jira] [Comment Edited] (YARN-7448) [API] Add SchedulingRequest to the AllocateRequest

2017-11-16 Thread Panagiotis Garefalakis (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7448?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16255864#comment-16255864
 ] 

Panagiotis Garefalakis edited comment on YARN-7448 at 11/16/17 7:50 PM:


Thanks for assigning this to me, [~asuresh].

bq. You mean the SchedulingRequestBuilder constructor - True, but neither are a 
bunch of other things like PlacementConstraints, Allocation tags etc. But, 
given that it is probably a required field - might not be a bad idea to add it.
I meant the SchedulingRequest newInstance method - it seems the resourceSizing 
parameter is never used.

bq. Will assign this to you - can you put in another patch with the extra 
newInstance method ?
Sure, I am now submitting a new patch with some extra tests and the latest 
AllocateRequest newInstance method.



was (Author: pgaref):
Thanks for assigning this to me, [~asuresh].

> You mean the SchedulingRequestBuilder constructor - True, but neither are a 
> bunch of other things like PlacementConstraints, Allocation tags etc. But, 
> given that it is probably a required field - might not be a bad idea to add 
> it.
I meant the SchedulingRequest newInstance method - it seems the resourceSizing 
parameter is never used.

> Will assign this to you - can you put in another patch with the extra 
> newInstance method ?
Sure, I am now submitting a new patch with some extra tests and the latest 
AllocateRequest newInstance method.


> [API] Add SchedulingRequest to the AllocateRequest
> --
>
> Key: YARN-7448
> URL: https://issues.apache.org/jira/browse/YARN-7448
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Arun Suresh
>Assignee: Panagiotis Garefalakis
> Attachments: YARN-7448-YARN-6592.001.patch, 
> YARN-7448-YARN-6592.002.patch, YARN-7448-YARN-6592.003.patch, 
> YARN-7448-YARN-6592.004.patch
>
>
> YARN-6594 introduces the {{SchedulingRequest}}. This JIRA tracks the 
> inclusion of the SchedulingRequest into the AllocateRequest.






[jira] [Updated] (YARN-7448) [API] Add SchedulingRequest to the AllocateRequest

2017-11-16 Thread Panagiotis Garefalakis (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-7448?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Panagiotis Garefalakis updated YARN-7448:
-
Attachment: YARN-7448-YARN-6592.004.patch

> [API] Add SchedulingRequest to the AllocateRequest
> --
>
> Key: YARN-7448
> URL: https://issues.apache.org/jira/browse/YARN-7448
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Arun Suresh
>Assignee: Panagiotis Garefalakis
> Attachments: YARN-7448-YARN-6592.001.patch, 
> YARN-7448-YARN-6592.002.patch, YARN-7448-YARN-6592.003.patch, 
> YARN-7448-YARN-6592.004.patch
>
>
> YARN-6594 introduces the {{SchedulingRequest}}. This JIRA tracks the 
> inclusion of the SchedulingRequest into the AllocateRequest.






[jira] [Commented] (YARN-7448) [API] Add SchedulingRequest to the AllocateRequest

2017-11-16 Thread Panagiotis Garefalakis (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7448?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16255864#comment-16255864
 ] 

Panagiotis Garefalakis commented on YARN-7448:
--

Thanks for assigning this to me, [~asuresh].

> You mean the SchedulingRequestBuilder constructor - True, but neither are a 
> bunch of other things like PlacementConstraints, Allocation tags etc. But, 
> given that it is probably a required field - might not be a bad idea to add 
> it.
I meant the SchedulingRequest newInstance method - it seems the resourceSizing 
parameter is never used.

> Will assign this to you - can you put in another patch with the extra 
> newInstance method ?
Sure, I am now submitting a new patch with some extra tests and the latest 
AllocateRequest newInstance method.
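As a rough illustration of the factory overload being discussed, here is a minimal sketch under heavily simplified assumptions - the real AllocateRequest and SchedulingRequest in the YARN API carry many more fields (ask lists, release lists, progress, allocation tags, constraints) and use builders, so the class shapes below are illustrative only:

```java
import java.util.ArrayList;
import java.util.Collections;
import java.util.List;

// Heavily simplified stand-in; the real SchedulingRequest carries
// allocation tags, placement constraints, resource sizing, etc.
class SchedulingRequest { }

class AllocateRequest {
    private final List<SchedulingRequest> schedulingRequests;

    private AllocateRequest(List<SchedulingRequest> schedulingRequests) {
        this.schedulingRequests = schedulingRequests;
    }

    // The extra factory method under discussion: build an AllocateRequest
    // directly from a list of SchedulingRequests.
    public static AllocateRequest newInstance(List<SchedulingRequest> requests) {
        return new AllocateRequest(new ArrayList<>(requests));
    }

    public List<SchedulingRequest> getSchedulingRequests() {
        return Collections.unmodifiableList(schedulingRequests);
    }
}
```

Copying the incoming list defensively and returning an unmodifiable view keeps the request object effectively immutable once constructed.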


> [API] Add SchedulingRequest to the AllocateRequest
> --
>
> Key: YARN-7448
> URL: https://issues.apache.org/jira/browse/YARN-7448
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Arun Suresh
>Assignee: Panagiotis Garefalakis
> Attachments: YARN-7448-YARN-6592.001.patch, 
> YARN-7448-YARN-6592.002.patch, YARN-7448-YARN-6592.003.patch
>
>
> YARN-6594 introduces the {{SchedulingRequest}}. This JIRA tracks the 
> inclusion of the SchedulingRequest into the AllocateRequest.






[jira] [Updated] (YARN-7448) [API] Add SchedulingRequest to the AllocateRequest

2017-11-16 Thread Panagiotis Garefalakis (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-7448?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Panagiotis Garefalakis updated YARN-7448:
-
Attachment: YARN-7448-YARN-6592.003.patch

Addressing the comments, based on [~asuresh]'s patch.

> [API] Add SchedulingRequest to the AllocateRequest
> --
>
> Key: YARN-7448
> URL: https://issues.apache.org/jira/browse/YARN-7448
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Arun Suresh
>Assignee: Arun Suresh
> Attachments: YARN-7448-YARN-6592.001.patch, 
> YARN-7448-YARN-6592.002.patch, YARN-7448-YARN-6592.003.patch
>
>
> YARN-6594 introduces the {{SchedulingRequest}}. This JIRA tracks the 
> inclusion of the SchedulingRequest into the AllocateRequest.






[jira] [Commented] (YARN-7448) [API] Add SchedulingRequest to the AllocateRequest

2017-11-16 Thread Panagiotis Garefalakis (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7448?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16255653#comment-16255653
 ] 

Panagiotis Garefalakis commented on YARN-7448:
--

[~asuresh] the patch looks good to me.
Some minor comments:
* We need to expose an AllocateRequest newInstance method that takes a List of 
SchedulingRequests as a parameter
* ResourceSizing is never set in the SchedulingRequest constructor
* We might need to implement equals() and hashCode() for the complex ResourceSizing 
object
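A minimal sketch of what the equals()/hashCode() suggestion in the last bullet might look like, assuming a simplified stand-in for ResourceSizing - the field names below are illustrative, not the actual YARN API (the real class wraps a Resource object rather than raw memory/vcore values):

```java
import java.util.Objects;

// Hypothetical, simplified stand-in for YARN's ResourceSizing.
public class ResourceSizing {
    private final int numAllocations;
    private final long memoryMb;
    private final int vcores;

    public ResourceSizing(int numAllocations, long memoryMb, int vcores) {
        this.numAllocations = numAllocations;
        this.memoryMb = memoryMb;
        this.vcores = vcores;
    }

    @Override
    public boolean equals(Object o) {
        if (this == o) return true;
        if (!(o instanceof ResourceSizing)) return false;
        ResourceSizing that = (ResourceSizing) o;
        return numAllocations == that.numAllocations
            && memoryMb == that.memoryMb
            && vcores == that.vcores;
    }

    @Override
    public int hashCode() {
        // Derive hashCode from exactly the fields compared in equals,
        // so the equals/hashCode contract holds.
        return Objects.hash(numAllocations, memoryMb, vcores);
    }
}
```

Keeping hashCode() derived from exactly the fields compared in equals() matters once ResourceSizing instances are used as map keys or compared in tests.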

> [API] Add SchedulingRequest to the AllocateRequest
> --
>
> Key: YARN-7448
> URL: https://issues.apache.org/jira/browse/YARN-7448
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Arun Suresh
>Assignee: Arun Suresh
> Attachments: YARN-7448-YARN-6592.001.patch, 
> YARN-7448-YARN-6592.002.patch
>
>
> YARN-6594 introduces the {{SchedulingRequest}}. This JIRA tracks the 
> inclusion of the SchedulingRequest into the AllocateRequest.






[jira] [Commented] (YARN-6722) Bumping up pom file hadoop version

2017-06-19 Thread Panagiotis Garefalakis (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6722?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16054627#comment-16054627
 ] 

Panagiotis Garefalakis commented on YARN-6722:
--

[~ajisakaa] it seems that Jenkins did not clean the project before building 
(it is still using hadoop alpha3 in the pom files). What is the best way to fix this?


> Bumping up pom file hadoop version
> --
>
> Key: YARN-6722
> URL: https://issues.apache.org/jira/browse/YARN-6722
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: yarn-native-services
>Reporter: Panagiotis Garefalakis
>Assignee: Panagiotis Garefalakis
> Attachments: YARN-6722-yarn-native-services.001.patch
>
>
> The Hadoop version was recently changed to 3.0.0-alpha4, while the services-api 
> and slider pom files were compiled against hadoop 3.0.0-alpha3.
> This JIRA bumps up the Hadoop version to avoid compilation issues.






[jira] [Updated] (YARN-6722) Bumping up pom file hadoop version

2017-06-19 Thread Panagiotis Garefalakis (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-6722?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Panagiotis Garefalakis updated YARN-6722:
-
Attachment: YARN-6722-yarn-native-services.001.patch

> Bumping up pom file hadoop version
> --
>
> Key: YARN-6722
> URL: https://issues.apache.org/jira/browse/YARN-6722
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: yarn-native-services
>Reporter: Panagiotis Garefalakis
>Assignee: Panagiotis Garefalakis
> Attachments: YARN-6722-yarn-native-services.001.patch
>
>
> The Hadoop version was recently changed to 3.0.0-alpha4, while the services-api 
> and slider pom files were compiled against hadoop 3.0.0-alpha3.
> This JIRA bumps up the Hadoop version to avoid compilation issues.






[jira] [Updated] (YARN-6722) Bumping up pom file hadoop version

2017-06-19 Thread Panagiotis Garefalakis (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-6722?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Panagiotis Garefalakis updated YARN-6722:
-
Attachment: (was: YARN-6722.patch)

> Bumping up pom file hadoop version
> --
>
> Key: YARN-6722
> URL: https://issues.apache.org/jira/browse/YARN-6722
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: yarn-native-services
>Reporter: Panagiotis Garefalakis
>Assignee: Panagiotis Garefalakis
>
> The Hadoop version was recently changed to 3.0.0-alpha4, while the services-api 
> and slider pom files were compiled against hadoop 3.0.0-alpha3.
> This JIRA bumps up the Hadoop version to avoid compilation issues.






[jira] [Updated] (YARN-6722) Bumping up pom file hadoop version

2017-06-19 Thread Panagiotis Garefalakis (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-6722?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Panagiotis Garefalakis updated YARN-6722:
-
Attachment: YARN-6722.patch

> Bumping up pom file hadoop version
> --
>
> Key: YARN-6722
> URL: https://issues.apache.org/jira/browse/YARN-6722
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: yarn-native-services
>Reporter: Panagiotis Garefalakis
>Assignee: Panagiotis Garefalakis
>
> The Hadoop version was recently changed to 3.0.0-alpha4, while the services-api 
> and slider pom files were compiled against hadoop 3.0.0-alpha3.
> This JIRA bumps up the Hadoop version to avoid compilation issues.






[jira] [Updated] (YARN-6722) Bumping up pom file hadoop version

2017-06-19 Thread Panagiotis Garefalakis (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-6722?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Panagiotis Garefalakis updated YARN-6722:
-
Flags: Patch

> Bumping up pom file hadoop version
> --
>
> Key: YARN-6722
> URL: https://issues.apache.org/jira/browse/YARN-6722
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: yarn-native-services
>Reporter: Panagiotis Garefalakis
>Assignee: Panagiotis Garefalakis
>
> Hadoop version was recently changed to 3.0.0-alpha4 while services-api and 
> slider pom files were compiled against hadoop3.0.0-alpha3
> This Jira is bumping up hadoop version to avoid compilation issues.






[jira] [Created] (YARN-6722) Bumping up pom file hadoop version

2017-06-19 Thread Panagiotis Garefalakis (JIRA)
Panagiotis Garefalakis created YARN-6722:


 Summary: Bumping up pom file hadoop version
 Key: YARN-6722
 URL: https://issues.apache.org/jira/browse/YARN-6722
 Project: Hadoop YARN
  Issue Type: Sub-task
  Components: yarn-native-services
Reporter: Panagiotis Garefalakis
Assignee: Panagiotis Garefalakis


The Hadoop version was recently changed to 3.0.0-alpha4, while the services-api and 
slider pom files were compiled against hadoop 3.0.0-alpha3.

This JIRA bumps up the Hadoop version to avoid compilation issues.






[jira] [Commented] (YARN-6593) [API] Introduce Placement Constraint object

2017-05-18 Thread Panagiotis Garefalakis (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6593?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16016212#comment-16016212
 ] 

Panagiotis Garefalakis commented on YARN-6593:
--

[~kkaranasos] thanks for the patch! 

The discussion totally makes sense to me. Some comments:

*  Totally agree on using a more object-oriented way of representing both 
PlacementConstraint -> CompoundPlacementConstraint/SimplePlacementConstraint 
and SimplePlacementConstraint -> TargetConstraint/CardinalityConstraint. I 
think the main value for doing so is usability.
*  Protobuf extensions might also be something we could use. For example:
{code:java}
message TargetConstraintProto {
  extend SimplePlacementConstraintProto
  {
    required TargetConstraintProto constraint = 10; // Unique extension number
  }
}

message CardinalityConstraintProto {
  extend SimplePlacementConstraintProto
  {
    required CardinalityConstraintProto constraint = 11; // Unique extension number
  }
}
{code}

*  We will definitely need a validator implementation - also as a way to ensure 
users write constraints that actually make sense
*  I am also wondering if IN_ANY should be a separate **TargetOperator** - in 
a case like the C5 example in the design doc, we would avoid using any TargetValues
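The class hierarchy suggested in the first bullet could be sketched as follows; all class names come from this discussion, but the fields and operator representations are assumptions, not the final API:

```java
import java.util.List;
import java.util.Set;

// Hypothetical sketch of the object-oriented representation discussed above.
abstract class PlacementConstraint { }

// A compound constraint combines child constraints with a logical operator.
class CompoundPlacementConstraint extends PlacementConstraint {
    final String operator; // e.g. "AND" / "OR" (representation assumed)
    final List<PlacementConstraint> children;
    CompoundPlacementConstraint(String operator, List<PlacementConstraint> children) {
        this.operator = operator;
        this.children = children;
    }
}

abstract class SimplePlacementConstraint extends PlacementConstraint { }

// Constrains placement relative to a set of target values (e.g. tags).
class TargetConstraint extends SimplePlacementConstraint {
    final String targetOperator; // e.g. "IN" / "NOT_IN" (representation assumed)
    final Set<String> targetValues;
    TargetConstraint(String targetOperator, Set<String> targetValues) {
        this.targetOperator = targetOperator;
        this.targetValues = targetValues;
    }
}

// Constrains how many matching containers may co-exist in a scope.
class CardinalityConstraint extends SimplePlacementConstraint {
    final int minCardinality;
    final int maxCardinality;
    CardinalityConstraint(int minCardinality, int maxCardinality) {
        this.minCardinality = minCardinality;
        this.maxCardinality = maxCardinality;
    }
}
```

The usability benefit mentioned above comes from the type system itself: application code can pattern-match on the concrete subclass instead of inspecting a generic constraint record.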

Panagiotis


> [API] Introduce Placement Constraint object
> ---
>
> Key: YARN-6593
> URL: https://issues.apache.org/jira/browse/YARN-6593
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Konstantinos Karanasos
>Assignee: Konstantinos Karanasos
> Attachments: YARN-6593.001.patch
>
>
> This JIRA introduces an object for defining placement constraints.






[jira] [Assigned] (YARN-6347) Store container tags in ResourceManager

2017-03-15 Thread Panagiotis Garefalakis (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-6347?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Panagiotis Garefalakis reassigned YARN-6347:


Assignee: Panagiotis Garefalakis

> Store container tags in ResourceManager
> ---
>
> Key: YARN-6347
> URL: https://issues.apache.org/jira/browse/YARN-6347
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: resourcemanager
>Reporter: Konstantinos Karanasos
>Assignee: Panagiotis Garefalakis
>
> In YARN-6345 we introduce the notion of container tags.
> In this JIRA, we will create a service in the RM, similar to the Node Labels 
> Manager, that will store the tags of each active container.
> Note that a node inherits the tags of all containers that are running on that 
> node at each moment. Therefore, the container tags can be seen as dynamic 
> node labels. The suggested service will allow us to efficiently retrieve the 
> container tags of each node.
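A hedged sketch of such a store, assuming a per-node multiset of tags so that a node "inherits" the tags of the containers currently running on it; the class and method names are illustrative, not the eventual RM service:

```java
import java.util.HashMap;
import java.util.HashSet;
import java.util.Map;
import java.util.Set;

// Illustrative sketch of the proposed RM-side container-tags store.
public class ContainerTagsStore {
    // nodeId -> (tag -> number of running containers carrying that tag)
    private final Map<String, Map<String, Integer>> tagsPerNode = new HashMap<>();

    public void addContainer(String nodeId, Set<String> tags) {
        Map<String, Integer> node =
            tagsPerNode.computeIfAbsent(nodeId, k -> new HashMap<>());
        for (String tag : tags) {
            node.merge(tag, 1, Integer::sum); // increment the tag's count
        }
    }

    public void removeContainer(String nodeId, Set<String> tags) {
        Map<String, Integer> node = tagsPerNode.get(nodeId);
        if (node == null) return;
        for (String tag : tags) {
            // decrement, dropping the tag entirely when its count reaches zero
            node.computeIfPresent(tag, (t, c) -> c > 1 ? c - 1 : null);
        }
    }

    // The "dynamic node labels" view: tags of all containers on the node now.
    public Set<String> getNodeTags(String nodeId) {
        return new HashSet<>(
            tagsPerNode.getOrDefault(nodeId, new HashMap<>()).keySet());
    }
}
```

Counting tags (rather than storing a plain set) is what makes removal correct when several containers on the same node share a tag.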






[jira] [Assigned] (YARN-6345) Add container tags to resource requests

2017-03-15 Thread Panagiotis Garefalakis (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-6345?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Panagiotis Garefalakis reassigned YARN-6345:


Assignee: Panagiotis Garefalakis

> Add container tags to resource requests
> ---
>
> Key: YARN-6345
> URL: https://issues.apache.org/jira/browse/YARN-6345
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: yarn
>Reporter: Konstantinos Karanasos
>Assignee: Panagiotis Garefalakis
>
> This JIRA introduces the notion of container tags.
> When an application submits container requests, it is allowed to attach to 
> them a set of string tags. The corresponding resource requests will also 
> carry these tags.
> For example, a container that will be used for running an HBase Master can be 
> marked with the tag "hb-m". Another one belonging to a ZooKeeper application, 
> can be marked as "zk".
> Through container tags, we will be able to express constraints that refer to 
> containers with the given tags.






[jira] [Updated] (YARN-5468) Scheduling of long-running applications

2016-08-03 Thread Panagiotis Garefalakis (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-5468?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Panagiotis Garefalakis updated YARN-5468:
-
Attachment: YARN-5468.prototype.patch

> Scheduling of long-running applications
> ---
>
> Key: YARN-5468
> URL: https://issues.apache.org/jira/browse/YARN-5468
> Project: Hadoop YARN
>  Issue Type: New Feature
>  Components: capacityscheduler, fairscheduler
>Reporter: Konstantinos Karanasos
>Assignee: Konstantinos Karanasos
> Attachments: YARN-5468.prototype.patch
>
>
> This JIRA is about the scheduling of applications with long-running tasks.
> It will include adding support to YARN for a richer set of scheduling 
> constraints (such as affinity, anti-affinity, cardinality, and time 
> constraints), and extending the schedulers to take them into account during 
> the placement of containers on nodes.
> We plan to have both an online version that will accommodate such requests as 
> they arrive, as well as a Long-running Application Planner that will make 
> more global decisions by considering multiple applications at once.






[jira] [Updated] (YARN-5468) Scheduling of long-running applications

2016-08-03 Thread Panagiotis Garefalakis (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-5468?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Panagiotis Garefalakis updated YARN-5468:
-
Attachment: (was: YARN-5468.prototype.patch)

> Scheduling of long-running applications
> ---
>
> Key: YARN-5468
> URL: https://issues.apache.org/jira/browse/YARN-5468
> Project: Hadoop YARN
>  Issue Type: New Feature
>  Components: capacityscheduler, fairscheduler
>Reporter: Konstantinos Karanasos
>Assignee: Konstantinos Karanasos
> Attachments: YARN-5468.prototype.patch
>
>
> This JIRA is about the scheduling of applications with long-running tasks.
> It will include adding support to YARN for a richer set of scheduling 
> constraints (such as affinity, anti-affinity, cardinality, and time 
> constraints), and extending the schedulers to take them into account during 
> the placement of containers on nodes.
> We plan to have both an online version that will accommodate such requests as 
> they arrive, as well as a Long-running Application Planner that will make 
> more global decisions by considering multiple applications at once.






[jira] [Comment Edited] (YARN-5468) Scheduling of long-running applications

2016-08-02 Thread Panagiotis Garefalakis (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5468?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15405052#comment-15405052
 ] 

Panagiotis Garefalakis edited comment on YARN-5468 at 8/3/16 12:03 AM:
---

Attaching a patch to showcase the above proposal.

In this first patch we introduce allocation tags and three placement 
constraints: affinity, anti-affinity, and cardinality. We plan to 
consolidate these into a single constraint in the second version of the patch. For 
the time being we do not support time constraints.

In the current version, requests are accommodated in an online, greedy 
fashion.

We extend the distributed-shell application's Client and AM to demonstrate inter-job 
placement constraints. Some unit tests are also included to show the supported 
constraints (affinity, anti-affinity, and cardinality) at the node and rack level.
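The online, greedy accommodation described above can be illustrated with a minimal feasibility check, modeling affinity and anti-affinity as the two extremes of a cardinality range over a tag's count at the candidate node or rack; this is a sketch under assumed names, not the patch's actual code:

```java
// Illustrative greedy placement check: a constraint on a tag is evaluated
// against the number of already-placed containers carrying that tag in the
// candidate scope (node or rack).
public class GreedyPlacementCheck {
    // True if placing a container with a [min, max] cardinality constraint
    // on a tag is allowed, given the tag's current count in the scope.
    public static boolean canPlace(int tagCount, int minCardinality,
                                   int maxCardinality) {
        return tagCount >= minCardinality && tagCount <= maxCardinality;
    }

    public static boolean affinity(int tagCount) {
        // Affinity: at least one container with the tag must already be there.
        return canPlace(tagCount, 1, Integer.MAX_VALUE);
    }

    public static boolean antiAffinity(int tagCount) {
        // Anti-affinity: no container with the tag may already be there.
        return canPlace(tagCount, 0, 0);
    }
}
```

Folding affinity and anti-affinity into cardinality ranges is one way the three constraints could later be consolidated into a single constraint, as the comment anticipates.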


was (Author: pgaref):
Attaching a patch to showcase the above proposal.

In this first patch we introduce allocation tags and three placement 
constraints: affinity, anti-affinity, and cardinality. We plan to 
consolidate these into a single constraint in the second version of the patch. For 
the time being we do not support time constraints.

We extend the distributed-shell Client and AM to demonstrate affinity inter-job 
constraints. Some unit tests are also included to show the supported 
constraints (affinity, anti-affinity, and cardinality) at the node and rack level.

> Scheduling of long-running applications
> ---
>
> Key: YARN-5468
> URL: https://issues.apache.org/jira/browse/YARN-5468
> Project: Hadoop YARN
>  Issue Type: New Feature
>  Components: capacityscheduler, fairscheduler
>Reporter: Konstantinos Karanasos
>Assignee: Konstantinos Karanasos
> Attachments: YARN-5468.prototype.patch
>
>
> This JIRA is about the scheduling of applications with long-running tasks.
> It will include adding support to YARN for a richer set of scheduling 
> constraints (such as affinity, anti-affinity, cardinality, and time 
> constraints), and extending the schedulers to take them into account during 
> the placement of containers on nodes.
> We plan to have both an online version that will accommodate such requests as 
> they arrive, as well as a Long-running Application Planner that will make 
> more global decisions by considering multiple applications at once.






[jira] [Issue Comment Deleted] (YARN-5468) Scheduling of long-running applications

2016-08-02 Thread Panagiotis Garefalakis (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-5468?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Panagiotis Garefalakis updated YARN-5468:
-
Comment: was deleted

(was: Uploading a first prototype)

> Scheduling of long-running applications
> ---
>
> Key: YARN-5468
> URL: https://issues.apache.org/jira/browse/YARN-5468
> Project: Hadoop YARN
>  Issue Type: New Feature
>  Components: capacityscheduler, fairscheduler
>Reporter: Konstantinos Karanasos
>Assignee: Konstantinos Karanasos
>
> This JIRA is about the scheduling of applications with long-running tasks.
> It will include adding support to YARN for a richer set of scheduling 
> constraints (such as affinity, anti-affinity, cardinality, and time 
> constraints), and extending the schedulers to take them into account during 
> the placement of containers on nodes.
> We plan to have both an online version that will accommodate such requests as 
> they arrive, as well as a Long-running Application Planner that will make 
> more global decisions by considering multiple applications at once.






[jira] [Updated] (YARN-5468) Scheduling of long-running applications

2016-08-02 Thread Panagiotis Garefalakis (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-5468?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Panagiotis Garefalakis updated YARN-5468:
-
Attachment: LRS-Constraints-v2.patch

Uploading a first prototype

> Scheduling of long-running applications
> ---
>
> Key: YARN-5468
> URL: https://issues.apache.org/jira/browse/YARN-5468
> Project: Hadoop YARN
>  Issue Type: New Feature
>  Components: capacityscheduler, fairscheduler
>Reporter: Konstantinos Karanasos
>Assignee: Konstantinos Karanasos
> Attachments: LRS-Constraints-v2.patch
>
>
> This JIRA is about the scheduling of applications with long-running tasks.
> It will include adding support to YARN for a richer set of scheduling 
> constraints (such as affinity, anti-affinity, cardinality, and time 
> constraints), and extending the schedulers to take them into account during 
> the placement of containers on nodes.
> We plan to have both an online version that will accommodate such requests as 
> they arrive, as well as a Long-running Application Planner that will make 
> more global decisions by considering multiple applications at once.


