[
https://issues.apache.org/jira/browse/YARN-5342?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15384522#comment-15384522
]
Wangda Tan commented on YARN-5342:
----------------------------------
[~Naganarasimha],
bq. Because in the next NonExclusive mode allocation for a node of this partition,
the application for which the reset happened might be skipped and another
application allocated instead, while that partition might still have pending
resource requests.
IIUC, we now do allocation twice for a shareable node partition: the first pass
is for exclusive allocation and the second for shareable (non-exclusive)
allocation. This already implicitly confirms that the non-exclusive allocation
is safe.
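To spell out the two passes, here is a rough sketch (class and method names are
illustrative only, not the actual CapacityScheduler code) of per-node allocation
on a shareable partition: first try requests that respect partition exclusivity,
then fall back to non-exclusive allocation.
{code:java}
// Illustrative sketch of the two-pass allocation described above; not the real
// CapacityScheduler classes.
enum SchedulingMode { RESPECT_PARTITION_EXCLUSIVITY, IGNORE_PARTITION_EXCLUSIVITY }

class TwoPassAllocationSketch {
  interface Node { boolean isShareablePartition(); }
  interface Assignment { boolean isAllocated(); }
  interface Queue { Assignment assign(Node node, SchedulingMode mode); }

  Assignment allocateOnNode(Queue root, Node node) {
    // Pass 1: requests that explicitly target this node's partition.
    Assignment assigned = root.assign(node, SchedulingMode.RESPECT_PARTITION_EXCLUSIVITY);
    if (assigned.isAllocated() || !node.isShareablePartition()) {
      return assigned;
    }
    // Pass 2: only reached for shareable (non-exclusive) partitions, so apps
    // willing to borrow idle partition resources still get a chance.
    return root.assign(node, SchedulingMode.IGNORE_PARTITION_EXCLUSIVITY);
  }
}
{code}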
Please let me know if I missed anything. I want to check this patch in as soon
as possible for 2.8 and do more comprehensive improvements in follow-up JIRAs.
Thanks,
> Improve non-exclusive node partition resource allocation in Capacity Scheduler
> ------------------------------------------------------------------------------
>
> Key: YARN-5342
> URL: https://issues.apache.org/jira/browse/YARN-5342
> Project: Hadoop YARN
> Issue Type: Sub-task
> Reporter: Wangda Tan
> Assignee: Sunil G
> Attachments: YARN-5342.1.patch, YARN-5342.2.patch
>
>
> In the previous implementation, one non-exclusive container allocation is
> possible only when missed-opportunity >= #cluster-nodes, and missed-opportunity
> is reset whenever a container is allocated on any node.
> This slows down container allocation on a non-exclusive node partition: *when
> a non-exclusive partition=x has idle resources, we can only allocate one
> container for this app every X=nodemanagers.heartbeat-interval secs for the
> whole cluster.*
> In this JIRA, I propose a fix to reset missed-opportunity only if we have >0
> pending resource for the non-exclusive partition OR we get an allocation from
> the default partition.
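For reference, the proposed reset rule from the description can be sketched
roughly as follows (made-up names, not the actual allocator code).
{code:java}
// Sketch of the proposed reset rule: the counter gates non-exclusive allocation
// and, per the description above, is reset only when the app still has pending
// resource on the non-exclusive partition or the allocation came from the
// default partition.
class MissedOpportunitySketch {
  static final String DEFAULT_PARTITION = "";

  private long missedOpportunity = 0;

  /** Pending resource the app still asks for on the given partition (stubbed). */
  long pendingResource(String partition) { return 0; }

  void onMissedOpportunity() {
    missedOpportunity++;
  }

  void onContainerAllocated(String allocatedPartition, String nonExclusivePartition) {
    boolean allocatedFromDefault = DEFAULT_PARTITION.equals(allocatedPartition);
    boolean stillPendingOnPartition = pendingResource(nonExclusivePartition) > 0;

    // Previous behavior: reset unconditionally on any allocation, which limits
    // the app to one non-exclusive container per cluster-wide heartbeat round.
    // Proposed behavior: reset only in the two cases from the description.
    if (stillPendingOnPartition || allocatedFromDefault) {
      missedOpportunity = 0;
    }
  }

  boolean canAllocateNonExclusive(int numClusterNodes) {
    // Non-exclusive allocation is attempted only after enough missed opportunities.
    return missedOpportunity >= numClusterNodes;
  }
}
{code}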