GitHub user windkit commented on a diff in the pull request:
https://github.com/apache/spark/pull/19510#discussion_r145890559
--- Diff: resource-managers/mesos/src/main/scala/org/apache/spark/scheduler/cluster/mesos/MesosCoarseGrainedSchedulerBackend.scala ---
@@ -64,6 +64,7 @@ private[spark] class MesosCoarseGrainedSchedulerBackend(
   private val MAX_SLAVE_FAILURES = 2
   private val maxCoresOption = conf.getOption("spark.cores.max").map(_.toInt)
+  private val maxMemOption = conf.getOption("spark.mem.max").map(Utils.memoryStringToMb)
--- End diff ---
@skonto
For cpus, I think we can compare against minCoresPerExecutor. For mem, we can call MesosSchedulerUtils.executorMemory to get the minimum requirement. Then, here, we parse the option, check it against that minimum, and throw an exception if it is too small?
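
A minimal sketch of that idea as a standalone object (the minimum values and method names here are stand-ins for illustration, not the actual MesosCoarseGrainedSchedulerBackend members):

```scala
// Sketch of the suggested fail-fast validation. The minimums below are
// assumptions; in the real backend they would come from
// minCoresPerExecutor and MesosSchedulerUtils.executorMemory.
object MaxOptionValidation {
  val minCoresPerExecutor: Int = 1    // assumed per-executor core minimum
  val minExecutorMemoryMb: Int = 1024 // assumed per-executor memory minimum (MB)

  /** Parse spark.cores.max, throwing if it is below the per-executor minimum. */
  def parseMaxCores(raw: Option[String]): Option[Int] =
    raw.map(_.toInt).map { cores =>
      require(cores >= minCoresPerExecutor,
        s"spark.cores.max ($cores) must be at least $minCoresPerExecutor")
      cores
    }

  /** Parse spark.mem.max (already converted to MB), throwing if too small. */
  def parseMaxMem(rawMb: Option[Int]): Option[Int] =
    rawMb.map { mem =>
      require(mem >= minExecutorMemoryMb,
        s"spark.mem.max ($mem MB) must be at least $minExecutorMemoryMb MB")
      mem
    }
}
```

With something like this, e.g. `parseMaxMem(Some(512))` would fail at startup with an IllegalArgumentException rather than the misconfiguration only surfacing later during offer matching.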
---