[
https://issues.apache.org/jira/browse/YARN-8178?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16443261#comment-16443261
]
Arun Suresh commented on YARN-8178:
-----------------------------------
Thanks for raising this, [~Zian Chen]. We were actually experimenting with
something similar in YARN-6826. The difference is that we modify the scheduler
to automatically allocate OPPORTUNISTIC containers when apps request resources
beyond the queue capacity. But yes, demotion is definitely cheaper than
promotion. It would also be interesting to make this scheduler-agnostic. I'd
like to follow how this progresses; let me know if I can help with
reviews/code.
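To make that concrete, here is a rough, untested sketch of what such a
decision could look like (this is not the actual YARN-6826 patch, and
chooseExecutionType() is just an illustrative helper):
{code:java}
import org.apache.hadoop.yarn.api.records.ExecutionType;
import org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.CSQueue;

// Illustrative only: pick the execution type for a new allocation based on
// whether the queue is already running above its guaranteed capacity.
public final class ExecutionTypePolicy {
  private ExecutionTypePolicy() {
  }

  public static ExecutionType chooseExecutionType(CSQueue queue) {
    // CSQueue#getUsedCapacity() is relative to the queue's configured
    // capacity, so a value above 1.0 means the guarantee is exhausted and
    // any extra containers would be handed out as OPPORTUNISTIC.
    return queue.getUsedCapacity() > 1.0f
        ? ExecutionType.OPPORTUNISTIC
        : ExecutionType.GUARANTEED;
  }
}
{code}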
> [Umbrella] Resource Over-commitment Based on Opportunistic Container
> Preemption
> -------------------------------------------------------------------------------
>
> Key: YARN-8178
> URL: https://issues.apache.org/jira/browse/YARN-8178
> Project: Hadoop YARN
> Issue Type: New Feature
> Components: capacity scheduler
> Reporter: Zian Chen
> Priority: Major
>
> We want to provide an opportunistic-container-based solution that achieves
> more aggressive preemption with a shorter preemption monitoring interval.
> Instead of letting applications allocate resources with a mix of guaranteed
> and opportunistic containers, we allow newly submitted applications to
> request only guaranteed containers. Meanwhile, we change the preemption
> logic so that, instead of killing containers, it demotes guaranteed
> containers to opportunistic ones; when new applications are submitted,
> their containers can then be launched by preempting the opportunistic
> containers.
> This approach is related to YARN-1011 but achieves over-commitment in a
> different way. It does, however, rely on the opportunistic container support
> implemented in YARN-1011 for our design to work.
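> For illustration only, the demotion itself could be expressed with the
> container update records that already exist for opportunistic containers;
> the actual change would live in the scheduler/preemption monitor, and
> buildDemotion() below is just a hypothetical helper:
> {code:java}
> import org.apache.hadoop.yarn.api.records.Container;
> import org.apache.hadoop.yarn.api.records.ContainerUpdateType;
> import org.apache.hadoop.yarn.api.records.ExecutionType;
> import org.apache.hadoop.yarn.api.records.UpdateContainerRequest;
>
> public final class DemotionSketch {
>   private DemotionSketch() {
>   }
>
>   // Build an update that turns a GUARANTEED container into an OPPORTUNISTIC
>   // one instead of killing it; the container's resource size is unchanged.
>   public static UpdateContainerRequest buildDemotion(Container container) {
>     return UpdateContainerRequest.newInstance(
>         container.getVersion(),
>         container.getId(),
>         ContainerUpdateType.DEMOTE_EXECUTION_TYPE,
>         null, // keep the current resource capability
>         ExecutionType.OPPORTUNISTIC);
>   }
> }
> {code}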