Steve Loughran commented on YARN-796:

I'm not going to comment on the current architecture; I need to understand the 
proposals better before commenting on what is planned for the first iteration. 
And iterating is, as others propose, a good way to do it.

FWIW, the SLIDER-81 case is about allowing us to allocate parts of a YARN 
cluster explicitly to groups; having queues select labels should suffice there. 
Although you can get exclusive use of a node by asking for all its resources, 
that does not guarantee that a node will be free for your team.

There's also a possible future need: label-based block placement. Can I 
label a set of nodes "hbase-production" and be confident that >1 node in that 
set will have a copy of the HBase data blocks? I don't think it's timely to 
address that today, but having the means to do so would be useful in future. 
That argues for having the HDFS layer able to see/receive the same labels.

h3. The patch

This is a major patch -- it always hurts me to see how much coding we need to 
do to work with protobuf, as that's a major portion of the diff.

# too much duplication of {{"-showLabels"}} and {{"-refreshLabels"}} strings in 
the code. These should be made constants somewhere.
# why is {{getClusterNodeLabels()}} catching {{YarnException}} and rethrowing 
it as an IOE? Can't it just be added to the signature?
# the version of the {{net.java.dev.eval}} dependency must go into the 
hadoop-project POM.
# Could you use SLF4J for the logging in the new classes? We're slowly moving 
towards that everywhere.
# Label manager should just be a service in its own right. If you do want to 
wrap it in {{LabelManagementService}}, can you (a) justify this and (b) match 
the full lifecycle?
# I don't want yet another text file format for configuration. The label config 
should be either Hadoop XML or some JSON syntax. Why? It helps other tools 
parse and generate it.
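To make the last point concrete, a label file in the familiar Hadoop XML style 
could look like the sketch below (the property names and values are purely 
illustrative, not existing YARN keys):

```xml
<?xml version="1.0" encoding="UTF-8"?>
<configuration>
  <!-- Illustrative only: one property per node, comma-separated labels. -->
  <property>
    <name>node.labels.host1.example.com</name>
    <value>gpu,high-mem</value>
  </property>
  <property>
    <name>node.labels.host2.example.com</name>
    <value>hbase-production</value>
  </property>
</configuration>
```

Anything in this shape can be read with the existing {{Configuration}} loader, 
and other tools can generate it without needing a custom parser.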

h3. Tests
# Tests assume that {{"/tmp/labelFile"}} is writeable; they should use 
{{"./target/labelFile"}} or something else under {{./target}}
# use {{assertEquals}} in service state tests too
# why the sleep in setup? That adds 6 seconds per test method.
# {{equalsIgnoreCase}} mustn't be used; go with 
{{.toLowerCase(Locale.ENGLISH).equals()}} for locale-safe comparison.
# there's a lot of test code that could be factored into common utilities 
(probes for config files, assertContains on labels). This will simplify the tests.
# we'll need tests that the schedulers work with labels, obviously.
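On the {{equalsIgnoreCase}} point: the failure only shows up when the JVM's 
default locale has surprising casing rules, which is why it tends to slip 
through tests. A minimal sketch of why pinning the locale matters (the class 
and method names here are mine, not from the patch):

```java
import java.util.Locale;

public class LabelNormalizer {  // hypothetical helper, not in the patch
  /**
   * Lower-case a label with a fixed locale. Relying on the JVM's default
   * locale breaks on platforms such as Turkish, where 'I' lower-cases to a
   * dotless 'ı', so "HIGH".toLowerCase() no longer equals "high".
   */
  static String normalize(String label) {
    return label.toLowerCase(Locale.ENGLISH);
  }

  public static void main(String[] args) {
    Locale turkish = new Locale("tr", "TR");
    // Under Turkish casing rules "HIGH-MEM" becomes "hıgh-mem", not "high-mem"
    System.out.println("HIGH-MEM".toLowerCase(turkish).equals("high-mem")); // false
    System.out.println(normalize("HIGH-MEM").equals("high-mem"));           // true
  }
}
```

The same reasoning applies anywhere labels are lower-cased for storage or lookup.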

> Allow for (admin) labels on nodes and resource-requests
> -------------------------------------------------------
>                 Key: YARN-796
>                 URL: https://issues.apache.org/jira/browse/YARN-796
>             Project: Hadoop YARN
>          Issue Type: Sub-task
>    Affects Versions: 2.4.1
>            Reporter: Arun C Murthy
>            Assignee: Wangda Tan
>         Attachments: LabelBasedScheduling.pdf, 
> Node-labels-Requirements-Design-doc-V1.pdf, YARN-796.patch, YARN-796.patch4
> It will be useful for admins to specify labels for nodes. Examples of labels 
> are OS, processor architecture etc.
> We should expose these labels and allow applications to specify labels on 
> resource-requests.
> Obviously we need to support admin operations on adding/removing node labels.

This message was sent by Atlassian JIRA