[
https://issues.apache.org/jira/browse/YARN-9598?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16859790#comment-16859790
]
Tao Yang edited comment on YARN-9598 at 6/10/19 7:38 AM:
---------------------------------------------------------
It's weird to hear that preemption should depend on excess reservations.
I think inter-queue preemption can't happen because of resource fragmentation
while the cluster still has 20GB of available memory, right? That's indeed a
problem in the community's current preemption logic. If so, I think it's not
re-reservation's business, although it can be worked around by it, and re-reservation
may hardly help with this in a large cluster.
> Make reservation work well when multi-node enabled
> --------------------------------------------------
>
> Key: YARN-9598
> URL: https://issues.apache.org/jira/browse/YARN-9598
> Project: Hadoop YARN
> Issue Type: Bug
> Components: capacityscheduler
> Reporter: Tao Yang
> Assignee: Tao Yang
> Priority: Major
> Attachments: YARN-9598.001.patch, image-2019-06-10-11-37-43-283.png,
> image-2019-06-10-11-37-44-975.png
>
>
> This issue is to solve problems with reservations when multi-node lookup is enabled:
> # As discussed in YARN-9576, a re-reservation proposal may always be generated
> on the same node and break scheduling for this app and later apps. I
> think re-reservation is unnecessary and we can replace it with
> LOCALITY_SKIPPED to give the scheduler a chance to look up subsequent candidates
> for this app when multi-node lookup is enabled.
> # The scheduler iterates over all nodes and tries to allocate the reserved container in
> LeafQueue#allocateFromReservedContainer. There are two problems here:
> ** Only the node holding the reserved container should be taken as a candidate, instead of
> all nodes, when calling FiCaSchedulerApp#assignContainers; otherwise the
> scheduler may later generate a reservation-fulfilled proposal on another node,
> which will always be rejected in FiCaScheduler#commonCheckContainerAllocation.
> ** The assignment returned by FiCaSchedulerApp#assignContainers can never be
> null even when the allocation is just skipped, which breaks the normal scheduling process
> for this leaf queue because of the if clause in LeafQueue#assignContainers:
> "if (null != assignment) { return assignment; }"
> # Nodes which already hold a reservation should be skipped when iterating candidates
> in RegularContainerAllocator#allocate, otherwise the scheduler may generate
> allocation or reservation proposals on these nodes, which will always be
> rejected in FiCaScheduler#commonCheckContainerAllocation.
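The third point above can be sketched as a simple candidate filter. This is a hypothetical, simplified illustration (the `Node` class and `allocatableCandidates` method are invented names), not the actual CapacityScheduler code:

```java
import java.util.Arrays;
import java.util.List;
import java.util.stream.Collectors;

// Sketch: when multi-node lookup is enabled, nodes that already hold a
// reserved container should be dropped from the candidate list before the
// allocator considers them, so no allocation or reservation proposal is
// generated on them only to be rejected later in
// commonCheckContainerAllocation.
public class CandidateFilterSketch {
    static class Node {
        final String id;
        final boolean hasReservedContainer;

        Node(String id, boolean hasReservedContainer) {
            this.id = id;
            this.hasReservedContainer = hasReservedContainer;
        }
    }

    // Keep only nodes without an existing reservation.
    static List<Node> allocatableCandidates(List<Node> candidates) {
        return candidates.stream()
                .filter(n -> !n.hasReservedContainer)
                .collect(Collectors.toList());
    }

    public static void main(String[] args) {
        List<Node> all = Arrays.asList(
                new Node("node1", false),
                new Node("node2", true),   // holds a reserved container
                new Node("node3", false));
        List<Node> usable = allocatableCandidates(all);
        System.out.println(usable.size()); // prints 2
    }
}
```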
--
This message was sent by Atlassian JIRA
(v7.6.3#76005)