Github user skonto commented on a diff in the pull request:

    https://github.com/apache/spark/pull/19510#discussion_r145389586
  
    --- Diff: resource-managers/mesos/src/main/scala/org/apache/spark/scheduler/cluster/mesos/MesosCoarseGrainedSchedulerBackend.scala ---
    @@ -64,6 +64,7 @@ private[spark] class MesosCoarseGrainedSchedulerBackend(
       private val MAX_SLAVE_FAILURES = 2
     
       private val maxCoresOption = conf.getOption("spark.cores.max").map(_.toInt)
    +  private val maxMemOption = conf.getOption("spark.mem.max").map(Utils.memoryStringToMb)
    --- End diff ---
    
    Can we defend against minimum values? For example, the default executor memory works out to roughly 1.4GB (the 1g default plus the 384MB minimum overhead). We could compare spark.mem.max against the value returned by MesosSchedulerUtils.executorMemory. I don't think the values calculated in canLaunchTask ever change.
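
    For illustration, a minimal sketch of the kind of guard this suggests, assuming the check runs where maxMemOption is read; executorMemory comes from MesosSchedulerUtils (which this backend already mixes in), and the require message is hypothetical:

        private val maxMemOption =
          conf.getOption("spark.mem.max").map(Utils.memoryStringToMb)

        // Hypothetical guard: reject a spark.mem.max below what a single
        // executor needs. executorMemory(sc) already includes the memory
        // overhead (max(10% of executor memory, 384MB)), so with the 1g
        // default it comes to 1408MB, i.e. ~1.4GB.
        maxMemOption.foreach { maxMem =>
          val minRequired = executorMemory(sc)
          require(maxMem >= minRequired,
            s"spark.mem.max ($maxMem MB) must be at least the memory " +
            s"required by one executor ($minRequired MB, including overhead)")
        }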


