[
https://issues.apache.org/jira/browse/YARN-8250?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16469001#comment-16469001
]
Haibo Chen commented on YARN-8250:
----------------------------------
Thanks for the review, [[email protected]]!
{quote}getContainersUtilization and updateContainersUtilization might need to
be synchronized or sampled (cloned).
{quote}
Most things in the container scheduler are not synchronized, on the assumption
that almost everything is handled by the single event dispatcher thread;
synchronization is only used where state is accessed by multiple threads.
getContainersMonitor() is likewise executed only by the dispatcher thread, so
I'd tend to leave it as is.
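For what it's worth, the sampling/cloning alternative you suggest could look
roughly like the sketch below. This is a minimal illustration, not the patch
itself: the field and method names are assumptions based on this discussion,
and it relies on the ResourceUtilization.newInstance copy factory.
{code:java}
import org.apache.hadoop.yarn.api.records.ResourceUtilization;

public class UtilizationSample {
  // Latest aggregate utilization; the reference is written only by the
  // dispatcher thread, volatile so readers see the newest snapshot.
  private volatile ResourceUtilization containersUtilization =
      ResourceUtilization.newInstance(0, 0, 0.0f);

  // Dispatcher thread: swap in a fresh snapshot rather than mutating
  // the published object in place.
  void updateContainersUtilization(ResourceUtilization latest) {
    this.containersUtilization = ResourceUtilization.newInstance(latest);
  }

  // Any thread: hand back a private copy, so callers never observe a
  // value being mutated mid-read.
  ResourceUtilization getContainersUtilization() {
    return ResourceUtilization.newInstance(containersUtilization);
  }
}
{code}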
shedQueuedOpportunisticContainers actually does shed in LIFO order. It walks
the queue from the beginning, keeping containers up to the allowed number, and
then kills the remaining queued containers through to the end of the queue.
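In simplified form, that walk is roughly the following. This is only a sketch
of the idea, with a String standing in for the real container type and the
actual kill action omitted.
{code:java}
import java.util.Iterator;
import java.util.LinkedList;
import java.util.Queue;

class OpportunisticQueueShedder {
  // Container IDs standing in for the real queued opportunistic containers.
  private final Queue<String> queuedOpportunisticContainers = new LinkedList<>();

  // Keep the first maxAllowed (oldest) entries in place; everything queued
  // after them is removed and killed, so the most recently queued
  // containers are the ones shed.
  void shedQueuedOpportunisticContainers(int maxAllowed) {
    int kept = 0;
    Iterator<String> it = queuedOpportunisticContainers.iterator();
    while (it.hasNext()) {
      String container = it.next();
      if (kept < maxAllowed) {
        kept++;        // within the allowed number: keep this container
      } else {
        it.remove();   // beyond the limit: drop it from the queue
        System.out.println("Killing queued container " + container);
      }
    }
  }
}
{code}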
I'll update the patch with the rest of your comments.
> Create another implementation of ContainerScheduler to support NM
> overallocation
> --------------------------------------------------------------------------------
>
> Key: YARN-8250
> URL: https://issues.apache.org/jira/browse/YARN-8250
> Project: Hadoop YARN
> Issue Type: Sub-task
> Reporter: Haibo Chen
> Assignee: Haibo Chen
> Priority: Major
> Attachments: YARN-8250-YARN-1011.00.patch
>
>
> YARN-6675 adds NM over-allocation support by modifying the existing
> ContainerScheduler and providing a utilizationBased resource tracker.
> However, the implementation adds a lot of complexity to ContainerScheduler,
> and future tweaks of the over-allocation strategy based on how many
> containers have been launched would be even more complicated.
> As such, this Jira proposes a new ContainerScheduler that always launches
> guaranteed containers immediately and queues opportunistic containers. It
> relies on a periodic check to launch opportunistic containers (see the
> sketch below).
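For illustration, the launch-immediately/queue-and-poll structure described
above might be outlined as follows. This is an assumed sketch only: the class
and method names are hypothetical, and the spare-capacity check is a
placeholder, not the logic in the attached patch.
{code:java}
import java.util.Queue;
import java.util.concurrent.ConcurrentLinkedQueue;
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;

class OverAllocationScheduler {
  private final Queue<Runnable> opportunisticQueue = new ConcurrentLinkedQueue<>();
  private final ScheduledExecutorService timer =
      Executors.newSingleThreadScheduledExecutor();

  void start(long checkIntervalMs) {
    // Periodically check whether queued opportunistic containers can start.
    timer.scheduleWithFixedDelay(
        this::launchOpportunisticContainersIfPossible,
        checkIntervalMs, checkIntervalMs, TimeUnit.MILLISECONDS);
  }

  void scheduleGuaranteed(Runnable container) {
    container.run();  // guaranteed containers always launch immediately
  }

  void scheduleOpportunistic(Runnable container) {
    opportunisticQueue.add(container);  // queued until resources free up
  }

  private void launchOpportunisticContainersIfPossible() {
    while (hasSpareCapacity() && !opportunisticQueue.isEmpty()) {
      opportunisticQueue.poll().run();
    }
  }

  private boolean hasSpareCapacity() {
    return true;  // placeholder for a real node-utilization check
  }
}
{code}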