[ https://issues.apache.org/jira/browse/YARN-6344?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15972069#comment-15972069 ]

Huangkaixuan edited comment on YARN-6344 at 4/18/17 3:34 AM:
-------------------------------------------------------------

Thanks, [~kkaranasos].
Since this seems to be a critical performance bug, do you plan to merge the fix 
into 2.8 or 2.7? BTW, how does the YARN community decide which patches get 
merged into which branches? I think this is important for the many users of 
the YARN 2.7 and 2.8 releases. [~leftnoteasy]


was (Author: huangkx6810):
Thanks, [~kkaranasos].
Since this seems to be a critical performance bug, do you plan to merge the fix 
into 2.8 or 2.7? BTW, how does the YARN community decide which patches get 
merged into which branches? I think this is important for the many users of 
the YARN 2.7 and 2.8 releases. [~wangda]

> Add parameter for rack locality delay in CapacityScheduler
> ----------------------------------------------------------
>
>                 Key: YARN-6344
>                 URL: https://issues.apache.org/jira/browse/YARN-6344
>             Project: Hadoop YARN
>          Issue Type: Bug
>          Components: capacityscheduler
>            Reporter: Konstantinos Karanasos
>            Assignee: Konstantinos Karanasos
>             Fix For: 2.9.0, 3.0.0-alpha3
>
>         Attachments: YARN-6344.001.patch, YARN-6344.002.patch, 
> YARN-6344.003.patch, YARN-6344.004.patch, YARN-6344-branch-2.8.patch
>
>
> When relaxing locality from node to rack, the {{node-locality-delay}} 
> parameter is used: when the scheduling opportunities for a scheduler key 
> exceed the value of this parameter, we relax locality and try to assign the 
> container to a node in the corresponding rack.
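> A rough sketch of that check (simplified, with illustrative names; the 
> actual CapacityScheduler code differs):
> {code:java}
> // Hypothetical sketch of the node -> rack relaxation check.
> // nodeLocalityDelay comes from yarn.scheduler.capacity.node-locality-delay
> // and is capped at the cluster size.
> boolean canRelaxToRack(long schedulingOpportunities, int nodeLocalityDelay,
>     int clusterNodes) {
>   int effectiveDelay = Math.min(nodeLocalityDelay, clusterNodes);
>   // Once the app has passed up more scheduling opportunities than the
>   // delay allows, permit a rack-local assignment.
>   return schedulingOpportunities > effectiveDelay;
> }
> {code}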
> On the other hand, when relaxing locality to off-switch (i.e., assigning the 
> container anywhere in the cluster), we use a {{localityWaitFactor}}, computed 
> as the number of outstanding requests for a specific scheduler key divided by 
> the size of the cluster.
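> As a simplified sketch (illustrative names, not the exact code), the 
> off-switch decision looks roughly like this:
> {code:java}
> // Hypothetical sketch of the off-switch relaxation.
> float localityWaitFactor(int outstandingRequests, int clusterNodes) {
>   // Outstanding requests for the scheduler key divided by the cluster
>   // size, capped at 1.
>   return Math.min((float) outstandingRequests / clusterNodes, 1.0f);
> }
>
> boolean canRelaxToOffSwitch(long schedulingOpportunities,
>     int outstandingRequests, int clusterNodes) {
>   float waitFactor = localityWaitFactor(outstandingRequests, clusterNodes);
>   // Relax to off-switch once the scheduling opportunities exceed the
>   // outstanding request count scaled by the wait factor.
>   return schedulingOpportunities > outstandingRequests * waitFactor;
> }
> {code}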
> For applications that request containers in big batches (e.g., traditional 
> MR jobs), and for relatively small clusters, the {{localityWaitFactor}} does 
> not affect relaxing locality much.
> However, for applications that request containers in small batches, this 
> factor takes a very small value, which leads to assigning off-switch 
> containers too soon. This situation is even more pronounced in big clusters.
> For example, if an application requests only one container per request, the 
> locality will be relaxed after a single missed scheduling opportunity.
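> Plugging that single-container case into the sketch above: with one 
> outstanding request on a 1000-node cluster, the wait factor is 1/1000, so 
> the threshold is 1 * 0.001 = 0.001 and a single miss already crosses it:
> {code:java}
> // 1 outstanding request, 1000 nodes: threshold = 1 * (1/1000) = 0.001,
> // so one missed opportunity (1 > 0.001) relaxes to off-switch immediately.
> canRelaxToOffSwitch(1, 1, 1000); // returns true
> {code}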
> The purpose of this JIRA is to rethink the way we are relaxing locality for 
> off-switch assignments.


