[ 
https://issues.apache.org/jira/browse/SPARK-17522?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15500078#comment-15500078
 ] 

Sun Rui commented on SPARK-17522:
---------------------------------

Yes. It can be tuned via a config option according to the workload; that's why 
the experimental code reads the conf "spark.deploy.spreadOut".
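
For illustration, here is a minimal sketch of how the behaviour could be toggled 
per application, assuming the experimental change below is applied so that the 
Mesos backend honours "spark.deploy.spreadOut" (the app name is hypothetical):

{code}
import org.apache.spark.SparkConf

// Assumption: with the experimental change applied, "spark.deploy.spreadOut"
// set to true spreads executors across offers, while false keeps the packing
// behaviour described in the issue.
val conf = new SparkConf()
  .setAppName("spread-out-example")   // hypothetical application name
  .set("spark.deploy.spreadOut", "true")
{code}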

> [MESOS] More even distribution of executors on Mesos cluster
> ------------------------------------------------------------
>
>                 Key: SPARK-17522
>                 URL: https://issues.apache.org/jira/browse/SPARK-17522
>             Project: Spark
>          Issue Type: Improvement
>          Components: Mesos
>    Affects Versions: 2.0.0
>            Reporter: Sun Rui
>
> MesosCoarseGrainedSchedulerBackend launches executors in a round-robin way 
> among the accepted offers received in one batch, but it is observed that 
> executors typically end up on only a small number of slaves.
> It turns out that on a cluster composed of many nodes, 
> MesosCoarseGrainedSchedulerBackend mostly receives only one offer at a time, 
> so the round-robin assignment of executors among offers does not have the 
> expected effect: executors are placed on fewer slave nodes than expected, 
> which hurts data locality.
> A slight experimental change to 
> MesosCoarseGrainedSchedulerBackend::buildMesosTasks() shows better executor 
> distribution among nodes:
> {code}
>     while (launchTasks) {
>       launchTasks = false
>       for (offer <- offers) {
>         ...
>       }
> +      // with spreadOut enabled (default), stop after a single pass over the
> +      // offers, so at most one executor is launched per offer per batch and
> +      // executors spread across more nodes
> +      if (conf.getBoolean("spark.deploy.spreadOut", true)) {
> +        launchTasks = false
> +      }
>     }
>     tasks.toMap
> {code}
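> To make the effect of the single pass concrete, here is a toy sketch (not 
> Spark code; the function and names are illustrative only) contrasting the 
> default packing behaviour with a single pass per offer batch:
> {code}
> // Toy model: the default loop keeps launching on the same offers until no
> // more executors are wanted (or fit), while a single pass launches at most
> // one executor per offer per batch, leaving the rest for later offers that
> // may come from other nodes.
> def assign(offers: Seq[String], executorsWanted: Int, singlePass: Boolean): Map[String, Int] = {
>   val counts = scala.collection.mutable.Map(offers.map(_ -> 0): _*)
>   var remaining = executorsWanted
>   var launchTasks = true
>   while (launchTasks && remaining > 0) {
>     launchTasks = false
>     for (offer <- offers if remaining > 0) {
>       counts(offer) += 1        // pretend the offer always has enough resources
>       remaining -= 1
>       launchTasks = true
>     }
>     if (singlePass) launchTasks = false   // the experimental behaviour
>   }
>   counts.toMap
> }
>
> // With one offer per batch and 4 executors wanted:
> // assign(Seq("node-1"), 4, singlePass = false)  // Map(node-1 -> 4): all packed on one node
> // assign(Seq("node-1"), 4, singlePass = true)   // Map(node-1 -> 1): the rest wait for other offers
> {code}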
> One of my Spark programs runs 30% faster with this change because of better 
> data locality.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
