[
https://issues.apache.org/jira/browse/YARN-8250?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16476088#comment-16476088
]
Wangda Tan commented on YARN-8250:
----------------------------------
Thanks [~haibochen] for the detailed explanation,
bq. Not sure what can be done here to unify the two, as they fundamentally have
issues with the other one's approach. Hence, the proposal to have two
implementations.
Is it possible to make a pluggable policy that checks whether a container X can
be launched, returning true or false? For the existing ContainerScheduler, it
would always return true; for the over-allocation case, the scheduler consults
the policy to decide. Related code could be pulled out into separate policy
implementations where possible.
Maybe I didn't get the full picture, but from what I can see, there's still no
fundamental issue that blocks us from sharing a single implementation (with
pluggable policies) for the two scenarios.
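The pluggable check described above could look something like the following sketch. All names here (ContainerLaunchPolicy, AlwaysLaunchPolicy, OverAllocationPolicy) are illustrative, not actual YARN classes, and the utilization math is a simplified stand-in for whatever the real resource tracker reports:

```java
/** Hypothetical policy hook: decides whether a container may launch now. */
interface ContainerLaunchPolicy {
  boolean canLaunch(long containerMb, long nodeUtilizedMb, long nodeCapacityMb);
}

/** Models the existing ContainerScheduler behavior: always launch. */
class AlwaysLaunchPolicy implements ContainerLaunchPolicy {
  @Override
  public boolean canLaunch(long containerMb, long nodeUtilizedMb, long nodeCapacityMb) {
    return true;
  }
}

/** Over-allocation case: launch only if projected utilization stays under a threshold. */
class OverAllocationPolicy implements ContainerLaunchPolicy {
  private final double threshold; // fraction of node capacity, e.g. 0.8

  OverAllocationPolicy(double threshold) {
    this.threshold = threshold;
  }

  @Override
  public boolean canLaunch(long containerMb, long nodeUtilizedMb, long nodeCapacityMb) {
    return nodeUtilizedMb + containerMb <= threshold * nodeCapacityMb;
  }
}

public class PolicySketch {
  public static void main(String[] args) {
    ContainerLaunchPolicy existing = new AlwaysLaunchPolicy();
    ContainerLaunchPolicy overAlloc = new OverAllocationPolicy(0.8);
    // Existing scheduler launches unconditionally.
    System.out.println(existing.canLaunch(2048, 7000, 8192)); // true
    // Over-allocation refuses when 7000 + 2048 exceeds 0.8 * 8192.
    System.out.println(overAlloc.canLaunch(2048, 7000, 8192)); // false
    System.out.println(overAlloc.canLaunch(512, 4000, 8192));  // true
  }
}
```

With this shape, the scheduler itself stays identical for both scenarios; only the injected policy differs.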
> Create another implementation of ContainerScheduler to support NM
> overallocation
> --------------------------------------------------------------------------------
>
> Key: YARN-8250
> URL: https://issues.apache.org/jira/browse/YARN-8250
> Project: Hadoop YARN
> Issue Type: Sub-task
> Reporter: Haibo Chen
> Assignee: Haibo Chen
> Priority: Major
> Attachments: YARN-8250-YARN-1011.00.patch,
> YARN-8250-YARN-1011.01.patch, YARN-8250-YARN-1011.02.patch
>
>
> YARN-6675 adds NM over-allocation support by modifying the existing
> ContainerScheduler and providing a utilizationBased resource tracker.
> However, the implementation adds a lot of complexity to ContainerScheduler,
> and future tweaks to the over-allocation strategy based on how many
> containers have been launched would be even more complicated.
> As such, this Jira proposes a new ContainerScheduler that always launches
> guaranteed containers immediately and queues opportunistic containers. It
> relies on a periodic check to launch opportunistic containers.
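The proposed scheduler's shape can be sketched roughly as below. The class and method names are hypothetical (not taken from the attached patches), and memory is the only resource modeled, purely for illustration:

```java
import java.util.ArrayDeque;
import java.util.Queue;

/** Illustrative sketch: guaranteed containers launch immediately,
 *  opportunistic ones queue and are drained by a periodic check. */
class ProposedSchedulerSketch {
  private final Queue<Long> opportunisticQueueMb = new ArrayDeque<>();
  private long utilizedMb;
  private final long capacityMb;
  private final double overAllocationThreshold; // e.g. 0.8 of capacity

  ProposedSchedulerSketch(long capacityMb, double overAllocationThreshold) {
    this.capacityMb = capacityMb;
    this.overAllocationThreshold = overAllocationThreshold;
  }

  /** GUARANTEED containers are launched right away, unconditionally. */
  void scheduleGuaranteed(long resourceMb) {
    utilizedMb += resourceMb;
  }

  /** OPPORTUNISTIC containers are only queued here, never launched directly. */
  void scheduleOpportunistic(long resourceMb) {
    opportunisticQueueMb.add(resourceMb);
  }

  /** Invoked periodically: drain the queue while projected utilization
   *  stays under the over-allocation threshold. Returns count launched. */
  int launchOpportunisticContainers() {
    int launched = 0;
    while (!opportunisticQueueMb.isEmpty()
        && utilizedMb + opportunisticQueueMb.peek()
            <= overAllocationThreshold * capacityMb) {
      utilizedMb += opportunisticQueueMb.poll();
      launched++;
    }
    return launched;
  }

  long getUtilizedMb() {
    return utilizedMb;
  }
}
```

Decoupling opportunistic launches into a periodic pass is what keeps the guaranteed-container path simple, at the cost of a small launch delay bounded by the check interval.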
--
This message was sent by Atlassian JIRA
(v7.6.3#76005)