[ 
https://issues.apache.org/jira/browse/YARN-9598?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16859810#comment-16859810
 ] 

Tao Yang edited comment on YARN-9598 at 6/10/19 8:14 AM:
---------------------------------------------------------

As I commented 
[above|https://issues.apache.org/jira/browse/YARN-9598?focusedCommentId=16859709&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-16859709],
 re-reservation is harmful in multi-node scenarios: it can let a low-priority 
app acquire far more resources than it needs, which won't be released until all 
of its needs are satisfied; this is inefficient for cluster utilization and can 
block requests from high-priority apps.
I think we should discuss this further. A simple way is to add a configuration 
switch so that users can decide for themselves whether to enable or disable 
re-reservation; if re-reservation is enabled, a node-sorting policy that puts 
nodes with reserved containers at the back of the sorted node list is also 
needed. 
Thoughts? 
cc: [~cheersyang]
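To make the proposal concrete, here is a minimal Java sketch of the two pieces suggested above: a user-controlled switch plus a node-sorting policy that pushes nodes holding reserved containers to the back of the candidate list. All names here (the {{Node}} class, the method names) are hypothetical illustrations, not the actual CapacityScheduler API.

```java
import java.util.ArrayList;
import java.util.Comparator;
import java.util.List;

public class ReservedLastSorter {

    // Minimal node model for illustration only.
    static class Node {
        final String name;
        final boolean hasReservedContainer;
        Node(String name, boolean hasReservedContainer) {
            this.name = name;
            this.hasReservedContainer = hasReservedContainer;
        }
    }

    // When re-reservation is enabled (the proposed config switch), sort
    // candidates so nodes that already hold reserved containers are tried
    // last. List.sort is stable, so the original order is kept within
    // each group; when the switch is off, the order is untouched.
    static List<Node> candidateOrder(List<Node> nodes, boolean reReservationEnabled) {
        List<Node> ordered = new ArrayList<>(nodes);
        if (reReservationEnabled) {
            // false sorts before true, so reserved nodes go to the back.
            ordered.sort(Comparator.comparing(n -> n.hasReservedContainer));
        }
        return ordered;
    }

    public static void main(String[] args) {
        List<Node> nodes = List.of(
            new Node("n1", true), new Node("n2", false),
            new Node("n3", true), new Node("n4", false));
        for (Node n : candidateOrder(nodes, true)) {
            System.out.print(n.name + " ");
        }
        // prints: n2 n4 n1 n3
    }
}
```

With the switch off, nodes keep their original order, so existing single-node behavior is unchanged.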



> Make reservation work well when multi-node enabled
> --------------------------------------------------
>
>                 Key: YARN-9598
>                 URL: https://issues.apache.org/jira/browse/YARN-9598
>             Project: Hadoop YARN
>          Issue Type: Bug
>          Components: capacityscheduler
>            Reporter: Tao Yang
>            Assignee: Tao Yang
>            Priority: Major
>         Attachments: YARN-9598.001.patch, image-2019-06-10-11-37-43-283.png, 
> image-2019-06-10-11-37-44-975.png
>
>
> This issue is to solve problems with reservation when multi-node placement is 
> enabled:
>  # As discussed in YARN-9576, a re-reservation proposal may always be 
> generated on the same node and break scheduling for this app and later apps. 
> I think re-reservation is unnecessary and we can replace it with 
> LOCALITY_SKIPPED to give the scheduler a chance to look up subsequent 
> candidates for this app when multi-node is enabled.
>  # The scheduler iterates over all nodes and tries to allocate for the 
> reserved container in LeafQueue#allocateFromReservedContainer. There are two 
> problems here:
>  ** Only the node of the reserved container should be taken as the candidate, 
> instead of all nodes, when calling FiCaSchedulerApp#assignContainers; 
> otherwise the scheduler may later generate a reservation-fulfilled proposal 
> on another node, which will always be rejected in 
> FiCaScheduler#commonCheckContainerAllocation.
>  ** The assignment returned by FiCaSchedulerApp#assignContainers can never be 
> null, even if the allocation was just skipped; this breaks the normal 
> scheduling process for this leaf queue because of the if clause in 
> LeafQueue#assignContainers: "if (null != assignment) \{ return assignment;}"
>  # Nodes which have been reserved should be skipped when iterating candidates 
> in RegularContainerAllocator#allocate; otherwise the scheduler may generate 
> an allocation or reservation proposal on these nodes, which will always be 
> rejected in FiCaScheduler#commonCheckContainerAllocation.
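The null-check pitfall in the second item can be sketched in isolation (simplified, hypothetical classes, not the real LeafQueue/CSAssignment code): when the returned assignment object is non-null but only records a skip, a guard that tests null alone returns early and breaks the queue's normal scheduling loop, so the guard should also inspect whether anything was actually allocated.

```java
public class AssignmentCheckSketch {
    enum SkippedType { NONE, QUEUE_LIMIT, OTHER }

    // Simplified stand-in for the scheduler's assignment result:
    // never null, and may only record that allocation was skipped.
    static class Assignment {
        final int allocatedMB;
        final SkippedType skipped;
        Assignment(int allocatedMB, SkippedType skipped) {
            this.allocatedMB = allocatedMB;
            this.skipped = skipped;
        }
    }

    // Guard shaped like the one quoted in the report: returns from the
    // queue even when the assignment is a pure skip.
    static boolean buggyShouldReturn(Assignment a) {
        return a != null;
    }

    // Fixed guard: only return when something was actually allocated,
    // or when the result is a genuine (non-skipped) assignment.
    static boolean fixedShouldReturn(Assignment a) {
        return a != null && (a.allocatedMB > 0 || a.skipped == SkippedType.NONE);
    }

    public static void main(String[] args) {
        Assignment skipped = new Assignment(0, SkippedType.OTHER);
        System.out.println(buggyShouldReturn(skipped)); // true: breaks the loop
        System.out.println(fixedShouldReturn(skipped)); // false: loop continues
    }
}
```

The real fix would of course live inside LeafQueue#assignContainers; this only demonstrates why testing for null alone is insufficient.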



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)
