[ https://issues.apache.org/jira/browse/SPARK-4751?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Sean Owen updated SPARK-4751:
-----------------------------
    Target Version/s:   (was: 1.3.0)

> Support dynamic allocation for standalone mode
> ----------------------------------------------
>
>                 Key: SPARK-4751
>                 URL: https://issues.apache.org/jira/browse/SPARK-4751
>             Project: Spark
>          Issue Type: New Feature
>          Components: Spark Core
>    Affects Versions: 1.2.0
>            Reporter: Andrew Or
>            Assignee: Andrew Or
>            Priority: Critical
>
> This is equivalent to SPARK-3822 but for standalone mode.
> This is actually a very tricky issue, because the scheduling mechanism in 
> the standalone Master uses different semantics: in standalone mode we 
> allocate resources in units of cores rather than fixed-size executors. By 
> default, an application grabs all the cores in the cluster unless 
> "spark.cores.max" is specified. Unfortunately, this means an application 
> could end up with executors of different sizes (in terms of cores) if:
> 1) App 1 kills an executor
> 2) App 2, with "spark.cores.max" set, grabs a subset of cores on a worker
> 3) App 1 requests an executor
> In this case, the new executor that App 1 gets back will be smaller than 
> the rest, because the Master can only grant it the cores still free on a 
> worker, and so it can execute fewer tasks in parallel. Further, standalone 
> mode is subject to the constraint that only one executor can be allocated 
> on each worker per application. As a result, it is rather meaningless to 
> request new executors if the existing ones are already spread out across 
> all workers.
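> To make the mechanics concrete, here is a minimal sketch of the knobs 
> involved, using the developer-API methods requestExecutors/killExecutors 
> added in 1.2 (the master URL, app name, and executor ID below are 
> placeholder assumptions, and in 1.2 these calls are only wired up for 
> YARN; making them meaningful here is the point of this issue):
> {code:scala}
> import org.apache.spark.{SparkConf, SparkContext}
>
> // Cap this app's share of the cluster; without this setting the app
> // grabs every core that is free when it registers with the Master.
> val conf = new SparkConf()
>   .setMaster("spark://master:7077") // placeholder standalone Master URL
>   .setAppName("dynamic-allocation-sketch")
>   .set("spark.cores.max", "8")
> val sc = new SparkContext(conf)
>
> // Step 1 of the scenario: give an executor back to the cluster.
> sc.killExecutors(Seq("0")) // executor ID "0" is a made-up example
>
> // (Meanwhile, App 2 may claim the freed cores on that worker.)
>
> // Step 3: ask for a replacement. The Master can only carve the new
> // executor out of whatever cores are free now, so the replacement
> // may be smaller than the executors the app already holds.
> sc.requestExecutors(1)
>
> sc.stop()
> {code}
> Note also the second constraint above: with one executor per worker per 
> application, an app that already holds an executor on every worker gains 
> nothing from requestExecutors, because no worker can host a second one 
> for it.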
--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

---------------------------------------------------------------------
To unsubscribe, e-mail: issues-unsubscr...@spark.apache.org
For additional commands, e-mail: issues-h...@spark.apache.org
