[ 
https://issues.apache.org/jira/browse/YARN-9598?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16860157#comment-16860157
 ] 

Weiwei Yang commented on YARN-9598:
-----------------------------------

Thanks for bringing this up and for the discussions. It looks like the 
discussion has diverged somewhat. Let's make sure we understand the problem we 
want to solve here.

If I understand correctly, [~jutia] was observing that re-reservations are 
made on a single node because the policy always returns the same order. 
Actually, this is not the only issue: this policy may also create hot-spot 
nodes when multiple threads place allocations on the same ordered nodes. I 
think we need to improve the policy; one possible solution, as I commented 
previously, is to shuffle nodes within each score range. BTW, [~jutia], are 
you already using this policy in your cluster?
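To make the "shuffle nodes per score-range" idea concrete, here is a minimal sketch (class and parameter names are hypothetical, not from any actual patch): nodes whose scores fall into the same bucket are treated as equivalent and shuffled among themselves, so equally-scored nodes no longer come back in a fixed order while higher-scored buckets still come first.

```java
import java.util.*;

public class ScoreRangeShuffler {
    // bucketWidth is a hypothetical tuning knob; nodes whose scores fall into
    // the same bucket are considered equivalent and shuffled among themselves.
    static <N> List<N> orderNodes(Map<N, Double> scores, double bucketWidth, Random rnd) {
        // Group nodes into score buckets, highest bucket first.
        TreeMap<Long, List<N>> buckets = new TreeMap<>(Comparator.reverseOrder());
        for (Map.Entry<N, Double> e : scores.entrySet()) {
            long bucket = (long) Math.floor(e.getValue() / bucketWidth);
            buckets.computeIfAbsent(bucket, k -> new ArrayList<>()).add(e.getKey());
        }
        // Shuffle within each bucket, then concatenate in descending score order.
        List<N> ordered = new ArrayList<>();
        for (List<N> b : buckets.values()) {
            Collections.shuffle(b, rnd);
            ordered.addAll(b);
        }
        return ordered;
    }
}
```

This keeps the policy's score ordering across ranges but randomizes ties, so concurrent allocation threads are less likely to all pile onto the same top node.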

The issue [~Tao Yang] raised is also valid: when re-reservations are driven by 
lots of small asks across many nodes (when the cluster is busy), big players 
can be starved. This issue should be reproducible with SLS. I took a quick 
look at the patch [~Tao Yang] uploaded, but I am also concerned about 
disabling re-reservation. How can we make sure a big container request does 
not get starved in that case? One way to improve this might be to swap 
reserved containers on NMs, e.g. if a container is already reserved somewhere 
else, then we could swap this spot with a bigger container that has no 
reservation yet. Just a random thought.
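A rough sketch of that swap check (all names are hypothetical, just to illustrate the idea): a reserved container is only moved aside for a strictly larger pending request, and only when it already holds a reservation on at least one other node, so it never loses its only foothold.

```java
public class ReservationSwapSketch {
    // Hypothetical view of a reserved container: resource size (e.g. memory
    // in MB) and how many nodes currently hold a reservation for it.
    static class Reserved {
        final int size;
        final int reservationCount;
        Reserved(int size, int reservationCount) {
            this.size = size;
            this.reservationCount = reservationCount;
        }
    }

    // Swap only when the pending request is strictly larger and the currently
    // reserved container keeps a reservation elsewhere after being displaced.
    static boolean shouldSwap(Reserved current, int pendingRequestSize) {
        return pendingRequestSize > current.size && current.reservationCount > 1;
    }
}
```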

 

> Make reservation work well when multi-node enabled
> --------------------------------------------------
>
>                 Key: YARN-9598
>                 URL: https://issues.apache.org/jira/browse/YARN-9598
>             Project: Hadoop YARN
>          Issue Type: Bug
>          Components: capacityscheduler
>            Reporter: Tao Yang
>            Assignee: Tao Yang
>            Priority: Major
>         Attachments: YARN-9598.001.patch, image-2019-06-10-11-37-43-283.png, 
> image-2019-06-10-11-37-44-975.png
>
>
> This issue is to solve problems about reservation when multi-node enabled:
>  # As discussed in YARN-9576, the re-reservation proposal may always be 
> generated on the same node and break scheduling for this app and later apps. 
> I think re-reservation is unnecessary and we can replace it with 
> LOCALITY_SKIPPED to let the scheduler have a chance to look at subsequent 
> candidates for this app when multi-node is enabled.
>  # The scheduler iterates over all nodes and tries to allocate the reserved 
> container in LeafQueue#allocateFromReservedContainer. There are two problems 
> here:
>  ** The node of the reserved container should be taken as the candidate 
> instead of all nodes when calling FiCaSchedulerApp#assignContainers, 
> otherwise the scheduler may later generate a reservation-fulfilled proposal 
> on another node, which will always be rejected in 
> FiCaScheduler#commonCheckContainerAllocation.
>  ** The assignment returned by FiCaSchedulerApp#assignContainers can never 
> be null even if the allocation was just skipped; this breaks the normal 
> scheduling process for this leaf queue because of the if clause in 
> LeafQueue#assignContainers: "if (null != assignment) \{ return assignment;}"
>  # Nodes that have been reserved should be skipped when iterating candidates 
> in RegularContainerAllocator#allocate, otherwise the scheduler may generate 
> an allocation or reservation proposal on these nodes, which will always be 
> rejected in FiCaScheduler#commonCheckContainerAllocation.
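The third point in the quoted description (skipping already-reserved nodes when iterating candidates) can be sketched roughly as below; the Node class here is a hypothetical stand-in for the scheduler's node view, not the actual SchedulerNode API.

```java
import java.util.ArrayList;
import java.util.List;

public class ReservedNodeFilter {
    // Hypothetical node view: an id plus whether a container is already
    // reserved on it.
    static class Node {
        final String id;
        final boolean hasReservation;
        Node(String id, boolean hasReservation) {
            this.id = id;
            this.hasReservation = hasReservation;
        }
    }

    // Skip nodes that already hold a reservation, so the allocator never emits
    // a proposal that the commit-time check would reject anyway.
    static List<Node> candidates(List<Node> all) {
        List<Node> result = new ArrayList<>();
        for (Node n : all) {
            if (!n.hasReservation) {
                result.add(n);
            }
        }
        return result;
    }
}
```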



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)
