Carlo Curino commented on YARN-4195:

[~leftnoteasy], I think you are guessing right (though with a wrong example)... The 
user, during job/reservation submission, can say {{GPU}}, and internally the 
system will translate this into {{GPU_PUBLICIP OR GPU_NOT-PUBLICIP}}, thus 
matching any container from either of the two underlying partitions. 
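A minimal sketch of that translation step, assuming partitions are identified by their label sets (the names and the {{expand}} helper below are hypothetical, not existing YARN APIs):

```java
import java.util.*;

// Hypothetical sketch: a single requested label (e.g. "GPU") expands to the
// OR of every partition whose label set contains that label.
public class LabelExpansion {
  static List<String> expand(String requested, Map<String, Set<String>> partitionLabels) {
    List<String> matches = new ArrayList<>();
    for (Map.Entry<String, Set<String>> e : partitionLabels.entrySet()) {
      // A container from any matched partition satisfies the request.
      if (e.getValue().contains(requested)) {
        matches.add(e.getKey());
      }
    }
    return matches;
  }

  public static void main(String[] args) {
    // Illustrative partition names; LinkedHashMap keeps insertion order.
    Map<String, Set<String>> partitions = new LinkedHashMap<>();
    partitions.put("GPU_PUBLICIP", Set.of("GPU", "PUBLICIP"));
    partitions.put("GPU_NOT-PUBLICIP", Set.of("GPU"));
    partitions.put("NOGPU_PUBLICIP", Set.of("PUBLICIP"));
    // "GPU" matches both GPU partitions: [GPU_PUBLICIP, GPU_NOT-PUBLICIP]
    System.out.println(expand("GPU", partitions));
  }
}
```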

Even nicer would be to allow each node to carry an arbitrary set of labels 
({{GPU}}, {{PUBLICIP}}), and have the system automatically infer partitions (from 
the node label specification). A configuration helper tool could show the Admin the 
list of partitions and their capacities, and help configure queues by specifying 
capacity allocations per-partition (or per-label, with some validation happening 
behind the scenes). As the number of "active" partitions (vs. the number of all 
possible partitions) is typically much smaller (and bounded by the number of 
nodes), this should be generally feasible. 
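To make the inference idea concrete, here is a hypothetical sketch (not an existing YARN API): each distinct label set observed on some node defines one active partition, which is why the count is bounded by the number of nodes.

```java
import java.util.*;

// Hypothetical sketch: infer "active" partitions from per-node label sets.
// Each distinct label set becomes one partition, mapped to its member nodes,
// so there can never be more active partitions than nodes.
public class PartitionInference {
  static Map<Set<String>, List<String>> inferPartitions(Map<String, Set<String>> nodeLabels) {
    Map<Set<String>, List<String>> partitions = new HashMap<>();
    for (Map.Entry<String, Set<String>> e : nodeLabels.entrySet()) {
      partitions.computeIfAbsent(e.getValue(), k -> new ArrayList<>()).add(e.getKey());
    }
    return partitions;
  }

  public static void main(String[] args) {
    Map<String, Set<String>> nodes = new HashMap<>();
    nodes.put("n1", Set.of("GPU", "PUBLICIP"));
    nodes.put("n2", Set.of("GPU"));
    nodes.put("n3", Set.of("GPU", "PUBLICIP"));
    // Three nodes, but only two active partitions: {GPU, PUBLICIP} and {GPU}.
    System.out.println(inferPartitions(nodes).size());
  }
}
```

A helper tool like the one described above could then report each inferred partition's aggregate capacity by summing its member nodes' resources.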

Speaking with [~kasha], it seems this would also go very well with some of his 
ideas for a scheduler refactoring and support for node labels in the 
{{FairScheduler}}. 
> Support of node-labels in the ReservationSystem "Plan"
> ------------------------------------------------------
>                 Key: YARN-4195
>                 URL: https://issues.apache.org/jira/browse/YARN-4195
>             Project: Hadoop YARN
>          Issue Type: Sub-task
>            Reporter: Carlo Curino
>            Assignee: Carlo Curino
>         Attachments: YARN-4195.patch
> As part of YARN-4193 we need to enhance the InMemoryPlan (and related 
> classes) to track the per-label available resources, as well as the per-label
> reservation-allocations.
