linzebing commented on a change in pull request #27223: [SPARK-30511][SPARK-28403][CORE] Don't treat failed/killed speculative tasks as pending in Spark scheduler
URL: https://github.com/apache/spark/pull/27223#discussion_r370526794
 
 

 ##########
 File path: core/src/main/scala/org/apache/spark/ExecutorAllocationManager.scala
 ##########
 @@ -263,9 +263,15 @@ private[spark] class ExecutorAllocationManager(
    */
   private def maxNumExecutorsNeeded(): Int = {
     val numRunningOrPendingTasks = listener.totalPendingTasks + listener.totalRunningTasks
-    math.ceil(numRunningOrPendingTasks * executorAllocationRatio /
-              tasksPerExecutorForFullParallelism)
-      .toInt
+    val maxNeeded = math.ceil(numRunningOrPendingTasks * executorAllocationRatio /
+      tasksPerExecutorForFullParallelism).toInt
+    if (listener.pendingSpeculativeTasks > 0 && tasksPerExecutorForFullParallelism > 1) {
+      // If we have pending speculative tasks, allocate one more executor to satisfy the
+      // locality requirements of speculative tasks
+      maxNeeded + 1
 
 Review comment:
   Let's use your example of 1000 tasks (with 2 tasks per executor). The dynamic allocation scheduler currently doesn't track how tasks are distributed across executors, so it's entirely possible that with 999 tasks running, one executor ends up running only a single task. If there is a pending speculative task for the same task index as that single task, the speculative copy can't be launched, because a speculative attempt can't run on the same executor as its original attempt. Doing a "+1" is a very simple way to address this situation.
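
   To make the arithmetic concrete, here is a minimal, self-contained Scala sketch (not the actual ExecutorAllocationManager code) that replays the example above. The value names mirror the listener counters referenced in the diff, and the concrete numbers (999 running tasks, 1 pending speculative task, 2 tasks per executor) are assumptions chosen for illustration only:

       object SpeculativeExecutorNeedSketch {
         def main(args: Array[String]): Unit = {
           val tasksPerExecutorForFullParallelism = 2   // e.g. 2 cores per executor, 1 core per task
           val executorAllocationRatio = 1.0

           val totalRunningTasks = 999                  // 999 of the 1000 tasks are still running
           val totalPendingTasks = 1                    // the single pending speculative copy
           val pendingSpeculativeTasks = 1

           val numRunningOrPendingTasks = totalPendingTasks + totalRunningTasks
           val maxNeeded = math.ceil(
             numRunningOrPendingTasks * executorAllocationRatio / tasksPerExecutorForFullParallelism).toInt
           // ceil(1000 / 2) = 500, i.e. exactly the executors already occupied by the 999 running
           // tasks, so no extra executor would be requested, even though the speculative copy
           // cannot land on the executor that is running its original attempt.
           println(s"maxNeeded without adjustment: $maxNeeded")     // 500

           val adjusted =
             if (pendingSpeculativeTasks > 0 && tasksPerExecutorForFullParallelism > 1) maxNeeded + 1
             else maxNeeded
           println(s"maxNeeded with the +1 adjustment: $adjusted")  // 501
         }
       }

   With the "+1", dynamic allocation asks for one executor beyond the plain ceiling, which gives the speculative attempt a free slot on an executor other than the one running the original task.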

