[ https://issues.apache.org/jira/browse/YARN-3983?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Wangda Tan updated YARN-3983:
-----------------------------
    Attachment: YARN-3983.4.patch

Attached ver.4 patch addressed Jian's comments.

> Make it easier to extend CapacityScheduler application allocation logic
> ------------------------------------------------------------------------
>
>                 Key: YARN-3983
>                 URL: https://issues.apache.org/jira/browse/YARN-3983
>             Project: Hadoop YARN
>          Issue Type: Bug
>            Reporter: Wangda Tan
>            Assignee: Wangda Tan
>         Attachments: YARN-3983.1.patch, YARN-3983.2.patch, YARN-3983.3.patch, 
> YARN-3983.4.patch
>
>
> While working on YARN-1651 (resource allocation for increasing containers), I 
> found it is very hard to extend the existing CapacityScheduler resource 
> allocation logic to support different types of resource allocation.
> For example, there are many differences between increasing a container and 
> allocating a new container:
> - Increasing a container doesn't need to check locality delay.
> - Increasing a container doesn't need to build/modify a resource request tree 
> (ANY->RACK/HOST).
> - Increasing a container doesn't need to check allocation/reservation 
> starvation (see {{shouldAllocOrReserveNewContainer}}).
> - After a container increase is approved by the scheduler, it needs to update 
> an existing container token instead of creating a new container.
> At the same time, there are many similarities when allocating different types 
> of resources:
> - User limits and queue limits are enforced for both.
> - Both need resource reservation logic. (Continuous reservation looking may be 
> needed for both.)
> The purpose of this JIRA is to make it easier to extend the CapacityScheduler 
> resource allocation logic to support different types of resource allocation, 
> to make common code reusable, and to improve code organization.
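> A minimal sketch of the kind of abstraction this could enable, covering both 
> lists above (all class and method names here are illustrative assumptions, 
> not taken from the attached patches): shared checks such as user-limit/
> queue-limit enforcement live in a base class, while per-type behavior 
> (locality delay, the ANY->RACK/HOST request tree, container-token handling) 
> is left to subclasses.
> {code:java}
> // Hypothetical sketch only; names and signatures are assumptions,
> // not the actual patch code.
> import org.apache.hadoop.yarn.api.records.Resource;
> import org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.CSAssignment;
> import org.apache.hadoop.yarn.server.resourcemanager.scheduler.common.fica.FiCaSchedulerNode;
>
> public abstract class AbstractContainerAllocator {
>
>   // Shared path: enforce user-limit/queue-limit (and reservation logic)
>   // once, for every type of allocation.
>   public final CSAssignment assignContainers(Resource clusterResource,
>       FiCaSchedulerNode node) {
>     if (!checkUserAndQueueLimits(clusterResource)) {
>       return CSAssignment.NULL_ASSIGNMENT;
>     }
>     return doAllocate(clusterResource, node);
>   }
>
>   protected abstract boolean checkUserAndQueueLimits(Resource clusterResource);
>
>   // Type-specific path: a new-container allocator checks locality delay,
>   // walks the ANY->RACK/HOST request tree, and creates a new container
>   // token; an increase allocator skips those steps and updates the token
>   // of an existing container instead.
>   protected abstract CSAssignment doAllocate(Resource clusterResource,
>       FiCaSchedulerNode node);
> }
> {code}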



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
