[
https://issues.apache.org/jira/browse/YARN-5139?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15299501#comment-15299501
]
Rohith Sharma K S commented on YARN-5139:
-----------------------------------------
Thanks [~leftnoteasy] for initiating this major change in allocation. +1 for the
proposal. I believe this will definitely improve two factors in particular:
first, the node locality hit rate, and second, the container allocation rate.
A couple of doubts:
# How are the nodes grouped for each application? Is it based on RR for each
application? If so, doesn't that increase the sorting time for each application
on every allocation cycle, especially in a large cluster deployment? (See the
sketch after these questions.)
# Is allocation fully independent of node heartbeats after this, i.e.
asynchronous allocation?
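To make the cost behind question 1 concrete, here is a minimal sketch, assuming
(hypothetically; this is not taken from the wip patch) that each application
re-sorts the full candidate node list on every allocation cycle:
{code}
# Illustration only: all names are hypothetical, not from wip-1.YARN-5139.patch.
# If every application re-sorts the full node list each allocation cycle,
# sorting alone costs O(apps * nodes * log(nodes)) per cycle.

import random

class Node:
    def __init__(self, node_id):
        self.node_id = node_id
        self.free_vcores = random.randint(0, 64)

class Application:
    def __init__(self, app_id):
        self.app_id = app_id

    def score(self, node):
        # Hypothetical per-application placement score (locality, headroom, ...).
        return node.free_vcores

def allocation_cycle(apps, all_nodes):
    for app in apps:
        # The per-application sort that question 1 asks about.
        candidates = sorted(all_nodes, key=app.score, reverse=True)
        # ... try to place the app's outstanding requests on the best candidates ...

nodes = [Node(i) for i in range(5000)]
apps = [Application(i) for i in range(1000)]
allocation_cycle(apps, nodes)  # ~1000 * 5000 * log2(5000) comparisons just to sort
{code}
If the per-application node grouping were cached or maintained incrementally
rather than rebuilt every cycle, that sorting cost would mostly disappear, which
is what the question is really asking.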
> [Umbrella] Move YARN scheduler towards global scheduler
> -------------------------------------------------------
>
> Key: YARN-5139
> URL: https://issues.apache.org/jira/browse/YARN-5139
> Project: Hadoop YARN
> Issue Type: New Feature
> Reporter: Wangda Tan
> Assignee: Wangda Tan
> Attachments: wip-1.YARN-5139.patch
>
>
> Existing YARN scheduler is based on node heartbeat. This can lead to
> sub-optimal decisions because the scheduler can only look at one node at a
> time when scheduling resources.
> Pseudo code of existing scheduling logic looks like:
> {code}
> for node in allNodes:
>    Go to parentQueue
>       Go to leafQueue
>          for application in leafQueue.applications:
>             for resource-request in application.resource-requests
>                try to schedule on node
> {code}
> Considering future complex resource placement requirements, such as node
> constraints (give me "a && b || c") or anti-affinity (do not allocate HBase
> RegionServers and Storm workers on the same host), we may need to consider
> moving the YARN scheduler towards global scheduling.
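For contrast with the quoted pseudocode above, here is a rough sketch of what a
globally scheduled allocation loop could look like; the names and scoring are
hypothetical and not taken from wip-1.YARN-5139.patch:
{code}
# Contrast sketch only (hypothetical names): a global scheduler evaluates many
# candidate nodes per resource request and picks the best fit, instead of trying
# requests only against whichever node happens to heartbeat.

from dataclasses import dataclass, field

@dataclass
class Node:
    host: str
    labels: set = field(default_factory=set)
    free_vcores: int = 0

@dataclass
class Request:
    vcores: int
    required_labels: set = field(default_factory=set)
    # e.g. hosts that already run Storm workers, for anti-affinity
    anti_affinity_hosts: set = field(default_factory=set)

def global_schedule(requests, cluster_nodes):
    for req in requests:
        # Look at *all* nodes that satisfy the placement constraints,
        # not just the single node that is currently heartbeating.
        candidates = [n for n in cluster_nodes
                      if req.required_labels <= n.labels
                      and n.host not in req.anti_affinity_hosts
                      and n.free_vcores >= req.vcores]
        if candidates:
            best = max(candidates, key=lambda n: n.free_vcores)  # hypothetical scoring
            best.free_vcores -= req.vcores
            print("placed %d vcores on %s" % (req.vcores, best.host))

nodes = [Node("host1", {"a", "b"}, 8), Node("host2", {"c"}, 4)]
global_schedule([Request(vcores=2, required_labels={"c"})], nodes)
{code}
The key difference is that placement constraints such as "a && b || c" or
anti-affinity can be evaluated against the whole candidate set at once, rather
than being limited to whatever the heartbeating node happens to offer.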