Github user tnachen commented on a diff in the pull request:

    https://github.com/apache/spark/pull/4027#discussion_r26987588
  
    --- Diff: core/src/main/scala/org/apache/spark/scheduler/cluster/mesos/CoarseMesosSchedulerBackend.scala ---
    @@ -63,20 +63,25 @@ private[spark] class CoarseMesosSchedulerBackend(
       // Maximum number of cores to acquire (TODO: we'll need more flexible controls here)
       val maxCores = conf.get("spark.cores.max", Int.MaxValue.toString).toInt
    
    +  val maxExecutorsPerSlave = conf.getInt("spark.mesos.coarse.executors.max", 1)
    +  val maxCpusPerExecutor = conf.getInt("spark.mesos.coarse.cores.max", Int.MaxValue)
    --- End diff --
    
    It's quite hard to differentiate this from spark.cores.max, since "spark.cores" itself is already vague; that setting configures the total number of cores a Spark app can schedule.
    spark.mesos.coarse.cores.max is the maximum number of cores a coarse-grained Spark executor can take, and the scheduler will assign anywhere from 1 core up to spark.mesos.coarse.cores.max per executor.
    
    So calling it coresPerExecutor doesn't seem right, as it's not a fixed value that the scheduler tries to schedule.
    
    How about spark.mesos.coarse.coresPerExecutor.max?
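    To illustrate the semantics being discussed, here is a minimal sketch (not the actual patch) of how an allocation decision could clamp the cores claimed per executor between 1 and the proposed maximum. The names offeredCores, remainingAppCores, and coresToClaim are assumptions for illustration only:
    
    // Minimal sketch: picking how many cores one executor should claim from a
    // single Mesos offer when each coarse-grained executor is capped by a
    // per-executor maximum (the setting discussed above).
    object CoreAllocationSketch {
      // offeredCores: CPU cores available in the Mesos offer (hypothetical name)
      // remainingAppCores: cores still needed to reach spark.cores.max (hypothetical name)
      // maxCoresPerExecutor: the per-executor cap, e.g. spark.mesos.coarse.cores.max
      def coresToClaim(offeredCores: Int, remainingAppCores: Int, maxCoresPerExecutor: Int): Int = {
        // The executor takes whatever is available, bounded by both the app-wide
        // remainder and the per-executor cap -- anywhere from 1 up to the max,
        // rather than a fixed "coresPerExecutor" value.
        math.min(offeredCores, math.min(remainingAppCores, maxCoresPerExecutor))
      }
    
      def main(args: Array[String]): Unit = {
        // Offer has 16 cores, the app still needs 10, cap is 4: claim 4 cores.
        println(coresToClaim(offeredCores = 16, remainingAppCores = 10, maxCoresPerExecutor = 4))
        // Offer has only 2 cores: claim 2, below the cap.
        println(coresToClaim(offeredCores = 2, remainingAppCores = 10, maxCoresPerExecutor = 4))
      }
    }
    
    That is why a name like coresPerExecutor would suggest a fixed allocation, while the cap-style name reflects the "up to" behavior.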

