Github user zuotingbing commented on a diff in the pull request:
https://github.com/apache/spark/pull/22849#discussion_r228785046
--- Diff:
core/src/main/scala/org/apache/spark/scheduler/cluster/CoarseGrainedSchedulerBackend.scala
---
@@ -240,7 +240,7 @@ class CoarseGrainedSchedulerBackend(scheduler:
TaskSchedulerImpl, val rpcEnv: Rp
val taskDescs = CoarseGrainedSchedulerBackend.this.synchronized {
// Filter out executors under killing
val activeExecutors = executorDataMap.filterKeys(executorIsAlive)
- val workOffers = activeExecutors.map {
+ val workOffers = activeExecutors.filter(_._2.freeCores > 0).map {
--- End diff ---
On our cluster there are many executors and task sets/tasks. As we know,
tasks are offered to executors in a round-robin manner on each revive
interval (`spark.scheduler.revive.interval`, default "1s"). It seems to
make no sense to offer tasks to executors that have no free cores.
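To illustrate the idea, here is a minimal, self-contained sketch of the filter. `ExecutorInfo` is a hypothetical stand-in for Spark's internal `ExecutorData` (only `executorHost` and `freeCores` are modeled); it is not the real class, just an assumption for demonstration:

```scala
// Hypothetical minimal model of the proposed change: drop executors with
// zero free cores before building work offers, instead of offering to
// every alive executor.
case class ExecutorInfo(executorHost: String, freeCores: Int)

object FilterOffersSketch {
  def main(args: Array[String]): Unit = {
    val executorDataMap = Map(
      "exec-1" -> ExecutorInfo("host-a", 4),
      "exec-2" -> ExecutorInfo("host-b", 0), // fully occupied; offering is pointless
      "exec-3" -> ExecutorInfo("host-c", 2)
    )

    // Before the change: every entry would produce an offer.
    // After the change: executors without free cores are skipped entirely,
    // shrinking the offer list the scheduler must iterate every interval.
    val workOffers = executorDataMap
      .filter { case (_, data) => data.freeCores > 0 }
      .map { case (id, data) => (id, data.executorHost, data.freeCores) }

    workOffers.foreach(println)
  }
}
```

With many executors and a one-second revive interval, skipping saturated executors avoids building and scanning offers that can never be matched to a task.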
---
---------------------------------------------------------------------
To unsubscribe, e-mail: [email protected]
For additional commands, e-mail: [email protected]