Github user windkit commented on a diff in the pull request:
https://github.com/apache/spark/pull/19555#discussion_r146272841
--- Diff: docs/running-on-mesos.md ---
@@ -196,17 +196,18 @@ configuration variables:
* Executor memory: `spark.executor.memory`
* Executor cores: `spark.executor.cores`
-* Number of executors: `spark.cores.max`/`spark.executor.cores`
+* Number of executors: min(`spark.cores.max`/`spark.executor.cores`,
+`spark.mem.max`/(`spark.executor.memory`+`spark.mesos.executor.memoryOverhead`))
Please see the [Spark Configuration](configuration.html) page for
details and default values.
Executors are brought up eagerly when the application starts, until
-`spark.cores.max` is reached. If you don't set `spark.cores.max`, the
-Spark application will reserve all resources offered to it by Mesos,
-so we of course urge you to set this variable in any sort of
-multi-tenant cluster, including one which runs multiple concurrent
-Spark applications.
+`spark.cores.max` or `spark.mem.max` is reached. If you don't set
+`spark.cores.max` and `spark.mem.max`, the Spark application will
+reserve all resources offered to it by Mesos, so we of course urge
--- End diff ---
Agreed, I will update it later.
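For clarity, the executor-count formula proposed in the diff can be sketched in a few lines. This is a hypothetical illustration only; the config names mirror the Spark/Mesos settings named in the diff, and all numeric values are assumed examples, not defaults.

```python
# Sketch of the executor-count formula from the diff:
#   num_executors = min(spark.cores.max / spark.executor.cores,
#                       spark.mem.max / (spark.executor.memory
#                                        + spark.mesos.executor.memoryOverhead))
# All values below are assumed examples for illustration.
cores_max = 16           # spark.cores.max
executor_cores = 4       # spark.executor.cores
mem_max = 32 * 1024      # spark.mem.max, in MiB (assumed 32 GiB)
executor_memory = 8 * 1024   # spark.executor.memory, in MiB
memory_overhead = 1024       # spark.mesos.executor.memoryOverhead, in MiB

by_cores = cores_max // executor_cores
by_memory = mem_max // (executor_memory + memory_overhead)
num_executors = min(by_cores, by_memory)
print(num_executors)  # min(4, 3) -> 3: memory is the binding limit here
```

With these example values the memory cap, not the core cap, determines how many executors Mesos can launch, which is exactly the behavior the added `min(...)` expression documents.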
---