[
https://issues.apache.org/jira/browse/YARN-8250?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16475062#comment-16475062
]
Wangda Tan commented on YARN-8250:
----------------------------------
[~haibochen],
I took a very brief look at the implemented code since I haven't had a chance to
read through the implementation in detail.
My thoughts:
- To me it is important to have a single implementation with different policies,
or to simply fix the existing one properly. Otherwise we will run into the same
CS vs. FS situation shortly after this.
- For 2), it looks like an issue we need to fix: why would we want to keep logic
that aggressively launches OPPORTUNISTIC containers only to have them killed by
the framework shortly after launch?
- For 1), I'm not sure we should leave all the decisions to cgroups. IIRC, in
some cases a container kill cannot be completed by the system immediately (e.g.
Docker containers), so it's better to look at the current status of running
containers before launching a new one; a rough sketch of that idea is below.
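To make the point about 1) concrete, here is a minimal, hypothetical sketch (not
the actual NM code; every class and field name here is made up) of gating an
OPPORTUNISTIC launch on the observed state of running containers, including ones
that have been signalled to die but have not exited yet:
{code:java}
import java.util.Collection;

public class LaunchGate {

  /** Hypothetical view of a container's state on the NM. */
  public enum State { RUNNING, KILLING /* signalled, not yet exited */ }

  public static final class RunningContainer {
    final State state;
    final long memBytes;   // memory the container may still be holding
    RunningContainer(State state, long memBytes) {
      this.state = state;
      this.memBytes = memBytes;
    }
  }

  private final long overAllocationLimitBytes;

  public LaunchGate(long overAllocationLimitBytes) {
    this.overAllocationLimitBytes = overAllocationLimitBytes;
  }

  /**
   * Returns true only if the requested container fits once we count every
   * container that is still alive on the node, including ones in KILLING
   * state whose resources have not been released yet.
   */
  public boolean canLaunchOpportunistic(Collection<RunningContainer> running,
                                        long requestedMemBytes) {
    long projected = requestedMemBytes;
    for (RunningContainer c : running) {
      // A container in KILLING state (e.g. a Docker container that has been
      // signalled but not yet exited) still holds its resources, so it is
      // counted exactly like a RUNNING one.
      if (c.state == State.RUNNING || c.state == State.KILLING) {
        projected += c.memBytes;
      }
    }
    return projected <= overAllocationLimitBytes;
  }
}
{code}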
> Create another implementation of ContainerScheduler to support NM
> overallocation
> --------------------------------------------------------------------------------
>
> Key: YARN-8250
> URL: https://issues.apache.org/jira/browse/YARN-8250
> Project: Hadoop YARN
> Issue Type: Sub-task
> Reporter: Haibo Chen
> Assignee: Haibo Chen
> Priority: Major
> Attachments: YARN-8250-YARN-1011.00.patch,
> YARN-8250-YARN-1011.01.patch, YARN-8250-YARN-1011.02.patch
>
>
> YARN-6675 adds NM over-allocation support by modifying the existing
> ContainerScheduler and providing a utilization-based resource tracker.
> However, the implementation adds a lot of complexity to ContainerScheduler,
> and future tweaks to the over-allocation strategy based on how many
> containers have been launched would be even more complicated.
> As such, this Jira proposes a new ContainerScheduler that always launches
> guaranteed containers immediately and queues opportunistic containers. It
> relies on a periodic check to launch the queued opportunistic containers.
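For readers skimming the thread, a minimal sketch of the scheduler shape the
summary above describes; all names (SimpleOverAllocatingScheduler, the launcher
and utilization callbacks) are hypothetical placeholders, not the classes in the
attached patches. GUARANTEED containers launch immediately; OPPORTUNISTIC ones
sit in a queue that a periodic check drains while utilization leaves headroom:
{code:java}
import java.util.Queue;
import java.util.concurrent.ConcurrentLinkedQueue;
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;
import java.util.function.Consumer;
import java.util.function.Supplier;

public class SimpleOverAllocatingScheduler {

  /** Hypothetical container request; only the execution type matters here. */
  public static final class ContainerRequest {
    final String id;
    final boolean guaranteed;
    public ContainerRequest(String id, boolean guaranteed) {
      this.id = id;
      this.guaranteed = guaranteed;
    }
  }

  private final Queue<ContainerRequest> opportunisticQueue =
      new ConcurrentLinkedQueue<>();
  private final Consumer<ContainerRequest> launcher;  // actually starts a container
  private final Supplier<Double> nodeUtilization;     // 0.0 .. 1.0
  private final double overAllocationThreshold;       // e.g. 0.75
  private final ScheduledExecutorService checker =
      Executors.newSingleThreadScheduledExecutor();

  public SimpleOverAllocatingScheduler(Consumer<ContainerRequest> launcher,
      Supplier<Double> nodeUtilization, double overAllocationThreshold) {
    this.launcher = launcher;
    this.nodeUtilization = nodeUtilization;
    this.overAllocationThreshold = overAllocationThreshold;
  }

  /** Starts the periodic check, the only path that launches OPPORTUNISTIC containers. */
  public void start(long checkIntervalMs) {
    checker.scheduleWithFixedDelay(this::launchOpportunisticIfRoomExists,
        checkIntervalMs, checkIntervalMs, TimeUnit.MILLISECONDS);
  }

  /** GUARANTEED containers launch immediately; OPPORTUNISTIC ones are queued. */
  public void schedule(ContainerRequest request) {
    if (request.guaranteed) {
      launcher.accept(request);
    } else {
      opportunisticQueue.add(request);
    }
  }

  private void launchOpportunisticIfRoomExists() {
    // Drain the queue only while measured node utilization leaves headroom.
    while (!opportunisticQueue.isEmpty()
        && nodeUtilization.get() < overAllocationThreshold) {
      launcher.accept(opportunisticQueue.poll());
    }
  }
}
{code}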