Github user zuotingbing commented on a diff in the pull request:

    https://github.com/apache/spark/pull/22849#discussion_r229203395

--- Diff: core/src/main/scala/org/apache/spark/scheduler/cluster/CoarseGrainedSchedulerBackend.scala ---
@@ -240,7 +240,7 @@ class CoarseGrainedSchedulerBackend(scheduler: TaskSchedulerImpl, val rpcEnv: Rp
     val taskDescs = CoarseGrainedSchedulerBackend.this.synchronized {
       // Filter out executors under killing
       val activeExecutors = executorDataMap.filterKeys(executorIsAlive)
-      val workOffers = activeExecutors.map {
+      val workOffers = activeExecutors.filter(_._2.freeCores > 0).map {
--- End diff --

BTW, if `freeCores < CPUS_PER_TASK`, the following code in `resourceOffers()` is inefficient, since `o.cores / CPUS_PER_TASK` = 0:

```scala
val tasks = shuffledOffers.map(o => new ArrayBuffer[TaskDescription](o.cores / CPUS_PER_TASK))
val availableSlots = shuffledOffers.map(o => o.cores / CPUS_PER_TASK).sum
```
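The point about integer division can be shown with a minimal, self-contained sketch. Note this uses a simplified `WorkerOffer` stand-in and a hypothetical `CPUS_PER_TASK` value of 2, not Spark's actual classes or configuration:

```scala
// Simplified stand-in for Spark's WorkerOffer (hypothetical, for illustration only).
case class WorkerOffer(executorId: String, host: String, cores: Int)

object OfferSketch {
  // Hypothetical value of spark.task.cpus; Spark's default is 1.
  val CPUS_PER_TASK = 2

  def main(args: Array[String]): Unit = {
    val offers = Seq(
      WorkerOffer("exec-1", "host-1", 4), // 4 / 2 = 2 usable task slots
      WorkerOffer("exec-2", "host-2", 1)  // 1 < CPUS_PER_TASK => 1 / 2 = 0 slots
    )

    // Integer division: an offer with fewer free cores than CPUS_PER_TASK
    // contributes zero slots, so it only adds iteration/shuffling overhead.
    val availableSlots = offers.map(o => o.cores / CPUS_PER_TASK).sum
    assert(availableSlots == 2)

    // Filtering such offers up front, in the spirit of the diff above,
    // drops the dead-weight offer before any per-offer work is done.
    val useful = offers.filter(_.cores >= CPUS_PER_TASK)
    assert(useful.map(_.executorId) == Seq("exec-1"))

    println(s"availableSlots=$availableSlots, useful=${useful.map(_.executorId)}")
  }
}
```

The diff itself only filters `freeCores > 0`; the sketch's stricter `cores >= CPUS_PER_TASK` filter illustrates the further optimization the comment hints at.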