GitHub user skyluc opened a pull request: https://github.com/apache/spark/pull/11047
[SPARK-13002][Mesos] Send initial request of executors for dyn allocation

Fix for [SPARK-13002](https://issues.apache.org/jira/browse/SPARK-13002) about the initial number of executors when running with dynamic allocation on Mesos.

Instead of fixing it just for the Mesos case, the change is made in `ExecutorAllocationManager`. It already drives the number of executors running on Mesos, only not the initial value.

The `None` and `Some(0)` are internal details of the computation of resources to reserve, in the Mesos backend scheduler. `executorLimitOption` has to be initialized correctly, otherwise the Mesos backend scheduler will either create too many executors at launch, or not create any executors and not be able to recover from this state.

Removed the 'special case' description in the doc. It was not totally accurate, and is not needed anymore.

This doesn't fix the same problem visible with Spark standalone. There is no straightforward way to send the initial value in standalone mode.

Somebody who knows this part of the YARN support should review this change.

You can merge this pull request into a Git repository by running:

    $ git pull https://github.com/skyluc/spark issue/initial-dyn-alloc-2

Alternatively you can review and apply these changes as the patch at:

    https://github.com/apache/spark/pull/11047.patch

To close this pull request, make a commit to your master/trunk branch with (at least) the following in the commit message:

    This closes #11047

----

commit 1c7594073267c8e0d4a58d7d4f6bd55df73d0316
Author: Luc Bourlier <luc.bourl...@typesafe.com>
Date: 2016-01-22T14:42:21Z

    Send initial request of executors for dyn allocation

----

If your project is set up for it, you can reply to this email and have your reply appear on GitHub as well. If your project does not have this feature enabled and wishes so, or if the feature is enabled but not working, please contact infrastructure at infrastruct...@apache.org or file a JIRA ticket with INFRA.
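To make the `executorLimitOption` issue concrete, here is a minimal, self-contained Scala sketch (hypothetical names, not actual Spark code) of how a Mesos-style backend might interpret an `Option[Int]` executor limit when deciding how many executors to launch from a resource offer. It illustrates why an uninitialized `None` (unbounded) and a premature `Some(0)` (launch nothing, with no way to recover) behave so differently, which is why the initial target must be sent up front:

```scala
// Hypothetical sketch: models the role of executorLimitOption in a
// Mesos-style backend. None = no limit set yet; Some(n) = dynamic
// allocation has requested a total of n executors.
object ExecutorLimitSketch {
  def executorsToLaunch(executorLimit: Option[Int],
                        running: Int,
                        offered: Int): Int =
    executorLimit match {
      // No limit initialized: the backend accepts everything offered,
      // potentially launching too many executors at startup.
      case None => offered
      // Limit set: launch only up to the remaining headroom.
      case Some(limit) => math.max(0, math.min(offered, limit - running))
    }

  def main(args: Array[String]): Unit = {
    // Uninitialized limit: all 5 offered executors are launched.
    println(executorsToLaunch(None, 0, 5))
    // Limit stuck at Some(0): nothing ever launches.
    println(executorsToLaunch(Some(0), 0, 5))
    // Initial target of 2 sent up front: launches are capped correctly.
    println(executorsToLaunch(Some(2), 0, 5))
  }
}
```

Under these assumptions, sending the initial executor request from `ExecutorAllocationManager` amounts to making sure the limit starts at the configured initial target rather than `None` or `Some(0)`.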
--- --------------------------------------------------------------------- To unsubscribe, e-mail: reviews-unsubscr...@spark.apache.org For additional commands, e-mail: reviews-h...@spark.apache.org