[
https://issues.apache.org/jira/browse/SPARK-1706?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
Nan Zhu reassigned SPARK-1706:
------------------------------
Assignee: Nan Zhu
> Allow multiple executors per worker in Standalone mode
> ------------------------------------------------------
>
> Key: SPARK-1706
> URL: https://issues.apache.org/jira/browse/SPARK-1706
> Project: Spark
> Issue Type: Improvement
> Components: Deploy
> Reporter: Patrick Wendell
> Assignee: Nan Zhu
> Fix For: 1.1.0
>
>
> Right now, if people want to launch multiple executors on each machine, they
> need to start multiple standalone workers. This is not too difficult, but it
> means you have extra JVMs sitting around.
> We should just allow users to set the number of cores they want per executor in
> standalone mode and then allow packing multiple executors onto each node. This
> would make standalone mode more consistent with YARN in the way resources are
> requested.
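> A minimal sketch of how an application might ask for this, assuming the new
> per-executor setting is exposed as a property named "spark.executor.cores"
> (that property name is an assumption here; only "spark.cores.max" exists for
> standalone mode today):
>
>     import org.apache.spark.{SparkConf, SparkContext}
>
>     // Hypothetical standalone-mode setup: cap the app at 8 cores total and
>     // ask for 2 cores per executor, so the master could pack up to 4
>     // executors, possibly several onto a single worker.
>     val conf = new SparkConf()
>       .setMaster("spark://master-host:7077")
>       .setAppName("multi-executor-demo")
>       .set("spark.cores.max", "8")
>       .set("spark.executor.cores", "2")  // assumed new per-executor setting
>     val sc = new SparkContext(conf)
>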
> It's not too big of a change as far as I can see. You'd need to:
> 1. Introduce a configuration for how many cores you want per executor.
> 2. Change the scheduling logic in Master.scala to take this into account (a
> rough packing sketch follows this list).
> 3. Change CoarseGrainedSchedulerBackend to not assume a one-to-one
> correspondence between hosts and executors.
> And maybe modify a few other places.
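> For step 2, the packing itself is basically integer division over each
> worker's free cores. A rough, self-contained sketch (not the actual
> Master.scala code; WorkerSlot and packExecutors are simplified stand-ins):
>
>     // Simplified stand-in for the master's view of a worker.
>     case class WorkerSlot(workerId: String, freeCores: Int)
>
>     // Given the requested cores per executor and the app's remaining core
>     // budget, decide how many executors to launch on each worker.
>     def packExecutors(workers: Seq[WorkerSlot],
>                       coresPerExecutor: Int,
>                       coresLeftForApp: Int): Map[String, Int] = {
>       var remaining = coresLeftForApp
>       workers.map { w =>
>         val fitOnWorker = w.freeCores / coresPerExecutor
>         val fitInBudget = remaining / coresPerExecutor
>         val toLaunch = math.min(fitOnWorker, fitInBudget)
>         remaining -= toLaunch * coresPerExecutor
>         w.workerId -> toLaunch
>       }.toMap
>     }
>
> With three workers offering 4 free cores each, coresPerExecutor = 2 and a
> budget of 8 cores, this would launch two executors on each of the first two
> workers and none on the third.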
--
This message was sent by Atlassian JIRA
(v6.2#6252)