Luc Bourlier created SPARK-13002:
------------------------------------

             Summary: Mesos scheduler backend does not follow the property 
spark.dynamicAllocation.initialExecutors
                 Key: SPARK-13002
                 URL: https://issues.apache.org/jira/browse/SPARK-13002
             Project: Spark
          Issue Type: Bug
          Components: Mesos
    Affects Versions: 1.6.0, 1.5.2
            Reporter: Luc Bourlier


When starting a Spark job on a Mesos cluster, all available cores are reserved 
(up to {{spark.cores.max}}), creating one executor per Mesos node and as many 
executors as the offers allow.
This happens even when dynamic allocation is enabled.

When dynamic allocation is enabled, the number of executors launched at startup 
should be limited to the value of {{spark.dynamicAllocation.initialExecutors}}.

The Mesos scheduler backend already follows the value computed by the 
{{ExecutorAllocationManager}} for the number of executors that should be up and 
running. Except at startup, when it simply creates all the executors it can.
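For illustration, a minimal sketch of a setup where the behaviour shows up (the master URL, core count and sleep are assumptions for the example, not taken from this report): with dynamic allocation enabled and {{spark.dynamicAllocation.initialExecutors}} set to 2, one would expect roughly 2 executors shortly after startup, but the coarse-grained Mesos backend registers executors on every offered node, up to {{spark.cores.max}}.

{code:scala}
import org.apache.spark.{SparkConf, SparkContext}

object InitialExecutorsRepro {
  def main(args: Array[String]): Unit = {
    val conf = new SparkConf()
      .setAppName("initial-executors-repro")
      .setMaster("mesos://mesos-master:5050")            // hypothetical Mesos master URL
      .set("spark.cores.max", "32")                      // upper bound on reserved cores
      .set("spark.dynamicAllocation.enabled", "true")
      .set("spark.dynamicAllocation.initialExecutors", "2")
      .set("spark.shuffle.service.enabled", "true")      // required for dynamic allocation

    val sc = new SparkContext(conf)

    // Give the backend time to accept Mesos offers, then count registered executors.
    // Expected: about 2 (initialExecutors); observed: one per offered node, up to spark.cores.max.
    Thread.sleep(30000)
    println(s"Registered executors: ${sc.getExecutorMemoryStatus.size - 1}")

    sc.stop()
  }
}
{code}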


