Github user ash211 commented on a diff in the pull request:

    https://github.com/apache/spark/pull/3349#discussion_r20548296
  
    --- Diff: docs/running-on-mesos.md ---
    @@ -183,6 +183,47 @@ node. Please refer to [Hadoop on 
Mesos](https://github.com/mesos/hadoop).
     In either case, HDFS runs separately from Hadoop MapReduce, without being 
scheduled through Mesos.
     
     
    +# Configuration
    +
    +See the [configuration page](configuration.html) for information on Spark configurations. The following configs are specific to Spark on Mesos.
    +
    +#### Spark Properties
    +
    +<table class="table">
    +<tr><th>Property Name</th><th>Default</th><th>Meaning</th></tr>
    +<tr>
    +  <td><code>spark.mesos.coarse</code></td>
    +  <td>false</td>
    +  <td>
    +    Set the run mode for Spark on Mesos. For more information about the run modes, refer to the Mesos Run Modes section above.
    +  </td>
    +</tr>
    +<tr>
    +  <td><code>spark.mesos.extra.cores</code></td>
    +  <td>0</td>
    +  <td>
    +    Set the extra number of CPUs to request per task.
    --- End diff --
    
    Is this setting for both coarse-grained and fine-grained modes?
    
    Also, can you provide a formula that produces the total number of cores requested? From what you have now, I'm thinking something like:
    
    `totalCoresPerExecutor = numTasks + extraCores`
    
    This would be similar to the formula for `memoryOverhead` below.
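
    As a minimal sketch of the guessed relationship above (the names `num_tasks` and `extra_cores` mirror the formula in this comment and are not confirmed Spark internals):

    ```python
    def total_cores_per_executor(num_tasks: int, extra_cores: int) -> int:
        # Hypothesis from the review comment: each task accounts for one core,
        # plus the cores configured via spark.mesos.extra.cores.
        return num_tasks + extra_cores

    # e.g. 4 concurrent tasks with spark.mesos.extra.cores=1
    print(total_cores_per_executor(4, 1))
    ```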

