[ https://issues.apache.org/jira/browse/SPARK-18967?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Reynold Xin updated SPARK-18967:
--------------------------------
    Fix Version/s:     (was: 2.2)
                   2.2.0

> Locality preferences should be used when scheduling even when delay 
> scheduling is turned off
> --------------------------------------------------------------------------------------------
>
>                 Key: SPARK-18967
>                 URL: https://issues.apache.org/jira/browse/SPARK-18967
>             Project: Spark
>          Issue Type: Bug
>          Components: Scheduler
>    Affects Versions: 2.1.0
>            Reporter: Imran Rashid
>            Assignee: Imran Rashid
>             Fix For: 2.2.0
>
>
> If you turn delay scheduling off by setting {{spark.locality.wait=0}}, you 
> effectively turn off the use of locality preferences whenever there is a bulk 
> scheduling event.  {{TaskSchedulerImpl}} will offer resources in whatever 
> random order it shuffles them into, rather than taking advantage of the most 
> local options.
>
> This happens because {{TaskSchedulerImpl}} offers resources to a 
> {{TaskSetManager}} one at a time, each time subject to a maxLocality 
> constraint.  However, that constraint doesn't move through all possible 
> locality levels -- it uses [{{tsm.myLocalityLevels}} 
> |https://github.com/apache/spark/blob/1a64388973711b4e567f25fa33d752066a018b49/core/src/main/scala/org/apache/spark/scheduler/TaskSchedulerImpl.scala#L360].
>   And {{tsm.myLocalityLevels}} [skips locality levels completely if the wait 
> == 0 | 
> https://github.com/apache/spark/blob/1a64388973711b4e567f25fa33d752066a018b49/core/src/main/scala/org/apache/spark/scheduler/TaskSetManager.scala#L953].
>   So with delay scheduling off, {{TaskSchedulerImpl}} immediately jumps to 
> offering resources to every TaskSetManager with {{maxLocality = ANY}}.
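>
> A standalone sketch of the level-skipping behaviour behind 
> {{tsm.myLocalityLevels}} (simplified and illustrative, not the real Spark 
> code; it only mirrors the check that each level's wait must be nonzero):
> {code:scala}
> // Simplified: a locality level is only kept if its configured wait is
> // nonzero, so spark.locality.wait=0 drops PROCESS_LOCAL, NODE_LOCAL and
> // RACK_LOCAL, and the TaskSetManager only ever exposes ANY.
> def validLocalityLevels(
>     hasProcessLocalTasks: Boolean,
>     hasNodeLocalTasks: Boolean,
>     hasRackLocalTasks: Boolean,
>     localityWaitMs: Long): Seq[String] = {
>   val levels = Seq.newBuilder[String]
>   if (hasProcessLocalTasks && localityWaitMs != 0) levels += "PROCESS_LOCAL"
>   if (hasNodeLocalTasks && localityWaitMs != 0) levels += "NODE_LOCAL"
>   if (hasRackLocalTasks && localityWaitMs != 0) levels += "RACK_LOCAL"
>   levels += "ANY"  // ANY is always included
>   levels.result()
> }
>
> // With a zero wait nothing but ANY survives:
> // validLocalityLevels(true, true, true, 0L)    => List(ANY)
> // validLocalityLevels(true, true, true, 3000L) => List(PROCESS_LOCAL, NODE_LOCAL, RACK_LOCAL, ANY)
> {code}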
> *WORKAROUND*: instead of setting {{spark.locality.wait=0}}, use 
> {{spark.locality.wait=1ms}}.  The only downside is for tasks that actually 
> finish in under 1ms, where you could even run into SPARK-18886, but that is a 
> relatively unlikely scenario for real workloads.
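>
> A minimal sketch of the workaround config (the app name is just a 
> placeholder):
> {code:scala}
> import org.apache.spark.SparkConf
>
> // Use a 1ms wait instead of 0 so the locality levels are still computed,
> // while keeping the delay-scheduling penalty negligible.
> val conf = new SparkConf()
>   .setAppName("locality-workaround")  // placeholder name
>   .set("spark.locality.wait", "1ms")  // instead of "0"
> {code}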
