Github user windkit commented on a diff in the pull request:

    https://github.com/apache/spark/pull/19555#discussion_r146272708
  
    --- Diff: docs/running-on-mesos.md ---
    @@ -196,17 +196,18 @@ configuration variables:
     
     * Executor memory: `spark.executor.memory`
     * Executor cores: `spark.executor.cores`
    -* Number of executors: `spark.cores.max`/`spark.executor.cores`
    +* Number of executors: min(`spark.cores.max`/`spark.executor.cores`, 
    +`spark.mem.max`/(`spark.executor.memory`+`spark.mesos.executor.memoryOverhead`))
     
     Please see the [Spark Configuration](configuration.html) page for
     details and default values.
     
     Executors are brought up eagerly when the application starts, until
    -`spark.cores.max` is reached.  If you don't set `spark.cores.max`, the
    -Spark application will reserve all resources offered to it by Mesos,
    -so we of course urge you to set this variable in any sort of
    -multi-tenant cluster, including one which runs multiple concurrent
    -Spark applications.
    +`spark.cores.max` or `spark.mem.max` is reached.  If you don't set 
    --- End diff --
    
    @ArtRand Sure, I will move the documentation to 19510
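    For illustration only, the executor-count formula in the diff above can be sketched as follows. The numeric values are hypothetical, and `spark.mem.max` is the configuration option proposed in this PR, not an existing Spark setting:
    
    ```python
    # Sketch of the proposed executor-count formula:
    #   min(spark.cores.max / spark.executor.cores,
    #       spark.mem.max / (spark.executor.memory + spark.mesos.executor.memoryOverhead))
    # All values below are made-up examples, not Spark defaults.
    
    def executor_count(cores_max, executor_cores, mem_max, executor_memory, memory_overhead):
        by_cores = cores_max // executor_cores
        by_memory = mem_max // (executor_memory + memory_overhead)
        return min(by_cores, by_memory)
    
    # Example: 32 cores max at 4 cores/executor allows 8 executors by cores;
    # 40 GiB max at (4 + 1) GiB/executor allows 8 by memory; min is 8.
    print(executor_count(32, 4, 40, 4, 1))  # -> 8
    ```
    
    Tightening either limit reduces the count, e.g. lowering the memory cap to 20 GiB caps the application at 4 executors.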

