[ https://issues.apache.org/jira/browse/YARN-7783?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16336820#comment-16336820 ]

Arun Suresh commented on YARN-7783:
-----------------------------------

[~cheersyang], w.r.t. your idea of binding node constraints - I like it. Do 
open a JIRA and we can iterate - but let's do that after the merge. We should 
do it in conjunction with YARN-6621, where we validate at constraint 
registration time.

bq. One more concern is, when there is a lot of allocate calls, we might get 
too much sync'd on AllocationTagsManager, as it's been accessed more than 
once....
Prior to the patch, the ATM is called in the doPlacement method: the worst case 
is O(n), where n = num(nodes).
This patch adds the validate method, which makes 2 extra calls to the ATM - but 
the validate method is called once for every *placed* request from the previous 
round, which is probably an order of magnitude smaller than the number of nodes 
in a large cluster - so I don't expect a big increase in access to the ATM. 
But you are right, we should do more disciplined testing.
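
To make the access pattern concrete, the per-request validation is essentially 
a remove / re-check / add-back bracket around the constraint check, something 
like the sketch below (removeTags, addTags and canSatisfy are placeholder 
names, not the actual ATM or constraint-util signatures in the patch):

{code:java}
// Sketch only - removeTags/addTags/canSatisfy are placeholder names.
boolean revalidate(SchedulingRequest req, SchedulerNode node,
                   AllocationTagsManager atm) {
  atm.removeTags(node.getNodeID(), req.getAllocationTags());   // ATM call 1
  boolean ok = canSatisfy(req, node, atm);                      // re-check constraint
  atm.addTags(node.getNodeID(), req.getAllocationTags());      // ATM call 2
  return ok;
}
// Called once per *placed* request from the previous round,
// not once per node, so the extra ATM traffic stays small.
{code}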

So, if you are fine with this in the short term, can I get a +1 so we can 
commit this?



> Add validation step to ensure constraints are not violated due to order in 
> which a request is processed
> -------------------------------------------------------------------------------------------------------
>
>                 Key: YARN-7783
>                 URL: https://issues.apache.org/jira/browse/YARN-7783
>             Project: Hadoop YARN
>          Issue Type: Sub-task
>            Reporter: Arun Suresh
>            Assignee: Arun Suresh
>            Priority: Blocker
>         Attachments: YARN-7783-YARN-6592.001.patch, 
> YARN-7783-YARN-6592.002.patch, YARN-7783-YARN-6592.003.patch, 
> YARN-7783-YARN-6592.004.patch
>
>
> When the algorithm has placed a container on a node, allocation tags are 
> added to the node if the constraint is satisfied. But depending on the order 
> in which the algorithm sees the requests, it is possible that a constraint 
> that happened to be valid during placement of an earlier-seen request might 
> no longer be valid after all subsequent requests have been placed.
> For example:
> Assume nodes n1, n2, n3, n4 and n5
> Consider the 2 constraints:
> # *foo* -> anti-affinity with *foo*
> # *bar* -> anti-affinity with *foo*
> And 2 requests
> # req1: NumAllocations = 4, allocTags = [foo]
> # req2: NumAllocations = 1, allocTags = [bar]
> If *req1* is seen first, the algorithm can place the 4 containers on n1, n2, 
> n3 and n4. And when it gets to *req2*, it will see that 4 nodes have the 
> *foo* tag and will place it on n5. But if *req2* is seen first, the *bar* 
> tag can be placed on any node, since no node at that point has *foo*. Then, 
> when it gets to *req1*, since *foo* has no anti-affinity with *bar*, the 
> algorithm can end up placing a *foo* container on the node with *bar*, 
> violating the second constraint.
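>
> For illustration, the two constraints above can be expressed with the 
> PlacementConstraints DSL on this branch roughly as follows (a sketch; the 
> exact builder calls may differ slightly):
> {code:java}
> import org.apache.hadoop.yarn.api.resource.PlacementConstraint;
> import static org.apache.hadoop.yarn.api.resource.PlacementConstraints.*;
> import static org.apache.hadoop.yarn.api.resource.PlacementConstraints.PlacementTargets.allocationTag;
>
> public class ConstraintSketch {
>   // Constraint for req1 (allocTags = [foo]):
>   // foo -> anti-affinity with foo, i.e. no two foo containers on the same node
>   static final PlacementConstraint FOO_ANTI_AFFINITY =
>       build(targetNotIn(NODE, allocationTag("foo")));
>
>   // Constraint for req2 (allocTags = [bar]):
>   // bar -> anti-affinity with foo, i.e. keep bar away from nodes that have foo
>   static final PlacementConstraint BAR_ANTI_AFFINITY =
>       build(targetNotIn(NODE, allocationTag("foo")));
> }
> {code}
> The two expressions are identical; they differ only in which request each is 
> attached to, which is exactly why the processing order matters.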
> To prevent the above, we need a validation step: after the placements for a 
> batch of requests are made, for each request we remove its tags from the 
> node and check whether the constraints would still be satisfied if the tags 
> were added back to the node.
> When applied to the example above, after the algorithm has run through *req2* 
> and then *req1*, we remove the *bar* tag from the node and try to add it 
> back. This time, constraint satisfaction fails, since there is now a *foo* 
> tag on the node and *bar* cannot be added. The algorithm will then retry 
> placing *req2* on another node.
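>
> As a self-contained sketch of that re-validation idea (a plain-Java toy 
> model, not the actual scheduler code; the class and method names here are 
> made up for illustration), the example plays out like this:
> {code:java}
> import java.util.*;
>
> /** Toy model of the validation step; not the actual YARN scheduler code. */
> public class RevalidationSketch {
>
>   // node -> allocation tags currently placed on it
>   static final Map<String, List<String>> nodeTags = new HashMap<>();
>
>   // Both constraints in the example are "anti-affinity with foo":
>   // a request is admissible on a node only if the node carries no foo tag.
>   static boolean admissible(String node) {
>     return !nodeTags.getOrDefault(node, Collections.emptyList()).contains("foo");
>   }
>
>   static void place(String node, String tag) {
>     nodeTags.computeIfAbsent(node, k -> new ArrayList<>()).add(tag);
>   }
>
>   // Validation step: temporarily remove the request's tag, re-check the
>   // constraint against the node's final tag state, then restore the tag.
>   static boolean validate(String node, String tag) {
>     nodeTags.get(node).remove(tag);
>     boolean ok = admissible(node);
>     nodeTags.get(node).add(tag);
>     return ok;
>   }
>
>   public static void main(String[] args) {
>     // req2 (bar) is processed first: no node has foo yet, so bar lands on n1
>     place("n1", "bar");
>     // req1 (foo x4) is processed next: foo has no constraint against bar,
>     // so the algorithm may legally place a foo on n1 as well
>     for (String n : Arrays.asList("n1", "n2", "n3", "n4")) {
>       place(n, "foo");
>     }
>
>     // Validation pass over the placed requests:
>     System.out.println("foo on n2 still valid? " + validate("n2", "foo")); // true
>     System.out.println("bar on n1 still valid? " + validate("n1", "bar")); // false
>     // bar fails re-validation, so the algorithm retries req2 on another node
>   }
> }
> {code}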



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)
