[ https://issues.apache.org/jira/browse/YARN-6596?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16300458#comment-16300458 ]

Konstantinos Karanasos commented on YARN-6596:
----------------------------------------------

Thanks for the comments, [~asuresh], [~sunilg], and [~cheersyang].
As [~asuresh] mentioned, I think we are good in terms of RM/AM failover.

[~sunilg], regarding your comments:
bq. However I am not very sure about the global constraints which you mentioned 
in this patch. When YARN-3409 comes in, an admin could set a few constraints and 
the placement manager might derive how to map them to the app's demand. Could 
you provide more clarity on these global constraints (use case)?
The idea is that a cluster admin should be able to define constraints that apply 
to any allocation carrying specific tags. For instance, say an admin does not 
want to allow more than 10 Spark containers per rack, or more than 2 HBase 
region servers per node. They would then add these two constraints to the 
Placement Constraint Manager (see the sketch below).
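As a rough illustration, assuming the constraint builder API from YARN-6592; the 
addGlobalConstraint call and the tag names below are hypothetical placeholders, 
not necessarily what the patch exposes:

{code:java}
import java.util.Collections;

import org.apache.hadoop.yarn.api.resource.PlacementConstraint;
import static org.apache.hadoop.yarn.api.resource.PlacementConstraints.*;

// Cluster-wide constraint: at most 10 containers tagged "spark" on any rack.
PlacementConstraint sparkPerRack = build(maxCardinality(RACK, 10, "spark"));

// Cluster-wide constraint: at most 2 containers tagged "hbase-rs" on any node.
PlacementConstraint hbaseRsPerNode = build(maxCardinality(NODE, 2, "hbase-rs"));

// Hypothetical registration with the Placement Constraint Manager instance (pcm).
pcm.addGlobalConstraint(Collections.singleton("spark"), sparkPerRack);
pcm.addGlobalConstraint(Collections.singleton("hbase-rs"), hbaseRsPerNode);
{code}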
When an application tries to place its containers, if they carry a "Spark" or 
"HBase" tag, these global constraints will have to be applied along with any 
other application-specific constraints. I will introduce some transformations 
that allow this combination of constraints to happen (sketched below).
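For instance, one natural transformation would be to take the conjunction of the 
application's own constraint and the matching global one. This is only a sketch 
with the builder API, not the actual transformation code:

{code:java}
import org.apache.hadoop.yarn.api.resource.PlacementConstraint;
import org.apache.hadoop.yarn.api.resource.PlacementConstraint.AbstractConstraint;
import static org.apache.hadoop.yarn.api.resource.PlacementConstraints.*;

// Application-specific constraint: anti-affinity between "spark" containers on a node.
AbstractConstraint appConstraint =
    targetNotIn(NODE, PlacementTargets.allocationTag("spark"));

// Global constraint registered by the admin: at most 10 "spark" containers per rack.
AbstractConstraint globalConstraint = maxCardinality(RACK, 10, "spark");

// Effective constraint used during placement: both must hold.
PlacementConstraint effective = build(and(appConstraint, globalConstraint));
{code}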
I think we can use the same (or a similar) API for node attributes as the one we 
use for adding global constraints. Then we could, for instance, say that HBase 
containers should only run on machines with java-8.
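Again only as a sketch, since node attributes (YARN-3409) are still in progress 
and the target expression for attributes may end up looking different:

{code:java}
import org.apache.hadoop.yarn.api.resource.PlacementConstraint;
import static org.apache.hadoop.yarn.api.resource.PlacementConstraints.*;

// Hypothetical: containers tagged "hbase-rs" go only on nodes whose "java"
// attribute equals "8"; the nodeAttribute target here is illustrative.
PlacementConstraint hbaseOnJava8 =
    build(targetIn(NODE, PlacementTargets.nodeAttribute("java", "8")));
{code}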
Does this clarify things?

bq. NodeLabelManager already has in-memory, file, and zk (WIP) implementations. 
Could we somehow reuse them, or do something along the same lines, to formalize 
similar placement-related info?
Indeed, I defined an interface for the Placement Constraint Manager that allows 
different implementations, and added the in-memory implementation as a first 
version. I was thinking we can add file, zk, or DB implementations in follow-up 
JIRAs.
I took ideas from the NodeLabelManager and the FederationStateStore (chatted 
with [~subru] about it too).
I was also thinking about unifying the code across the different *Managers, but 
I am not sure it is possible, especially for the in-memory one, given that they 
use very different data structures. But maybe we could factor out some code for 
the file or zk implementations? Did you have something like this in mind?
I guess we could also have a common way to choose between the different 
implementations (in-mem, file, etc.).
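To give a feel for the pluggability, the interface could look roughly like the 
sketch below; the method names are illustrative and not necessarily the ones in 
the attached patch:

{code:java}
import java.util.Map;
import java.util.Set;

import org.apache.hadoop.yarn.api.records.ApplicationId;
import org.apache.hadoop.yarn.api.resource.PlacementConstraint;

/**
 * Illustrative shape of a pluggable constraint store. An in-memory
 * implementation would back it with maps; file/zk/DB variants could be
 * added later behind the same interface.
 */
public interface PlacementConstraintManagerSketch {

  /** Register a cluster-wide constraint for allocations carrying the given tags. */
  void addGlobalConstraint(Set<String> allocationTags, PlacementConstraint constraint);

  /** Register a constraint scoped to a single application's allocation tags. */
  void addConstraint(ApplicationId appId, Set<String> allocationTags,
      PlacementConstraint constraint);

  /** Constraints (application-specific plus matching global ones) for a tag set. */
  Map<Set<String>, PlacementConstraint> getConstraints(ApplicationId appId,
      Set<String> allocationTags);

  /** Remove all constraints registered by an application (e.g., when it finishes). */
  void unregisterApplication(ApplicationId appId);
}
{code}

The common way to choose between implementations could then simply be a 
configuration property naming the implementation class, similar to how other 
pluggable RM components are selected; the exact key is of course up for 
discussion.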

> Introduce Placement Constraint Manager module
> ---------------------------------------------
>
>                 Key: YARN-6596
>                 URL: https://issues.apache.org/jira/browse/YARN-6596
>             Project: Hadoop YARN
>          Issue Type: Sub-task
>            Reporter: Konstantinos Karanasos
>            Assignee: Konstantinos Karanasos
>         Attachments: YARN-6596-YARN-6592.001.patch
>
>
> This RM module will be responsible for storing placement constraints, 
> allocation tags, and node attributes.
> It will be used when determining the placement of SchedulingRequests with 
> constraints.


