[
https://issues.apache.org/jira/browse/YARN-6344?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15927509#comment-15927509
]
Carlo Curino commented on YARN-6344:
------------------------------------
I agree with what [~kkaranasos] said. In our clusters, the localityWaitFactor
(as it is today) it almost never leads to a reasonable behavior. For example,
in a 5k nodes clusters, a very large job with 10k outstanding asks will only
get to wait 2 (or up to 4) scheduling opportunities before giving up on the
rack and going for off-switch. The change [~kkaranasos] is proposing looked
reasonable (he will share the code soon). We have been flighting it in tests
clusters with good results, and will be running it in prod in the coming days.
I think we could probably retain the current behavior if rack-locality-delay is
not specified, but in most scenarios is equivalent to say "we don't care about
locality unless the job is many times bigger than the cluster" in which case,
we might just remove a bunch of code from RM. Am I missing something?
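To make the arithmetic above concrete, here is a simplified sketch of the
behavior as described in this thread; the class and method names are
illustrative, not the actual CapacityScheduler code, and the threshold check
is an approximation of the real logic:

```java
/**
 * Simplified sketch of OFF_SWITCH relaxation as described in this JIRA.
 * Names and the exact threshold formula are illustrative assumptions,
 * not the real CapacityScheduler implementation.
 */
public class OffSwitchDelaySketch {

    /** localityWaitFactor ~ outstanding asks divided by cluster size. */
    static float localityWaitFactor(int outstandingAsks, int clusterNodes) {
        return (float) outstandingAsks / clusterNodes;
    }

    /** Relax to OFF_SWITCH once missed opportunities reach the factor. */
    static boolean canAssignOffSwitch(int missedOpportunities,
                                      int outstandingAsks, int clusterNodes) {
        return missedOpportunities
                >= localityWaitFactor(outstandingAsks, clusterNodes);
    }

    public static void main(String[] args) {
        // The example above: 5k-node cluster, 10k outstanding asks ->
        // the job waits only ~2 scheduling opportunities.
        System.out.println(localityWaitFactor(10000, 5000)); // 2.0
        System.out.println(canAssignOffSwitch(2, 10000, 5000)); // true

        // Small-batch case from the issue description: a single-container
        // ask relaxes after one missed scheduling opportunity.
        System.out.println(canAssignOffSwitch(1, 1, 5000)); // true
    }
}
```

Under this model, only a job whose outstanding asks are many times the cluster
size ever waits a meaningful number of opportunities, which is the problem
being raised.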
> Rethinking OFF_SWITCH locality in CapacityScheduler
> ---------------------------------------------------
>
> Key: YARN-6344
> URL: https://issues.apache.org/jira/browse/YARN-6344
> Project: Hadoop YARN
> Issue Type: Bug
> Components: capacityscheduler
> Reporter: Konstantinos Karanasos
>
> When relaxing locality from node to rack, the {{node-locality-parameter}} is
> used: when the scheduling opportunities for a scheduler key exceed the value
> of this parameter, we relax locality and try to assign the container to a
> node in the corresponding rack.
> On the other hand, when relaxing locality to off-switch (i.e., assigning the
> container anywhere in the cluster), we use a {{localityWaitFactor}}, which
> is computed as the number of outstanding requests for a specific scheduler
> key divided by the size of the cluster.
> For applications that request containers in big batches (e.g., traditional
> MR jobs), and for relatively small clusters, the localityWaitFactor does not
> affect relaxing locality much.
> However, for applications that request containers in small batches, this
> factor takes a very small value, which leads to assigning off-switch
> containers too soon. This situation is even more pronounced in big clusters.
> For example, if an application requests only one container per request,
> locality will be relaxed after a single missed scheduling opportunity.
> The purpose of this JIRA is to rethink the way we relax locality for
> off-switch assignments.
--
This message was sent by Atlassian JIRA
(v6.3.15#6346)