linzebing commented on a change in pull request #27223:
[SPARK-30511][SPARK-28403][CORE] Don't treat failed/killed speculative tasks as pending in Spark scheduler
URL: https://github.com/apache/spark/pull/27223#discussion_r370973985
##########
File path: core/src/main/scala/org/apache/spark/ExecutorAllocationManager.scala
##########
@@ -263,9 +263,15 @@ private[spark] class ExecutorAllocationManager(
    */
   private def maxNumExecutorsNeeded(): Int = {
     val numRunningOrPendingTasks = listener.totalPendingTasks + listener.totalRunningTasks
-    math.ceil(numRunningOrPendingTasks * executorAllocationRatio /
-      tasksPerExecutorForFullParallelism)
-      .toInt
+    val maxNeeded = math.ceil(numRunningOrPendingTasks * executorAllocationRatio /
+      tasksPerExecutorForFullParallelism).toInt
+    if (listener.pendingSpeculativeTasks > 0 && tasksPerExecutorForFullParallelism > 1) {
+      // If we have pending speculative tasks, allocate one more executor to satisfy the
+      // locality requirements of speculative tasks
+      maxNeeded + 1
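
For concreteness, here is a minimal standalone sketch of the patched arithmetic. The object name, parameter names, and sample inputs are illustrative assumptions, and the `else maxNeeded` branch is implied by the truncated hunk rather than shown in it:

// Standalone sketch of the patched maxNumExecutorsNeeded() arithmetic.
// All inputs below are illustrative assumptions, not values from the PR.
object MaxExecutorsSketch {
  def maxNumExecutorsNeeded(
      numRunningOrPendingTasks: Int,
      pendingSpeculativeTasks: Int,
      executorAllocationRatio: Double,
      tasksPerExecutorForFullParallelism: Int): Int = {
    val maxNeeded = math.ceil(numRunningOrPendingTasks * executorAllocationRatio /
      tasksPerExecutorForFullParallelism).toInt
    if (pendingSpeculativeTasks > 0 && tasksPerExecutorForFullParallelism > 1) {
      // One extra executor gives a pending speculative task a chance to run
      // on a host other than the one executing the original attempt.
      maxNeeded + 1
    } else {
      maxNeeded
    }
  }

  def main(args: Array[String]): Unit = {
    // 10 running+pending tasks, ratio 1.0, 4 task slots per executor:
    // ceil(10 * 1.0 / 4) = 3, plus 1 for the pending speculative task.
    println(maxNumExecutorsNeeded(10, 1, 1.0, 4)) // prints 4
  }
}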
Review comment:
Thanks! You've made excellent points: the +1 gives a pending speculative task a better chance of landing on a different host, but it's not a guarantee, and the slow hosts are the ones most likely to be loaded up anyway. I'm OK with doing the +1 only when maxNeeded == 1; I'll address it.
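
For illustration, a hedged sketch of what that follow-up might look like, assuming only the guard condition changes; the function name is hypothetical and this is one reading of the comment, not the committed fix:

// Sketch of the follow-up proposed above: apply the +1 only when
// maxNeeded == 1, i.e. when every task slot would otherwise sit on the
// one host already running the original attempt. Assumed shape, not the
// committed code.
def maxNumExecutorsNeededWithFollowUp(
    numRunningOrPendingTasks: Int,
    pendingSpeculativeTasks: Int,
    executorAllocationRatio: Double,
    tasksPerExecutorForFullParallelism: Int): Int = {
  val maxNeeded = math.ceil(numRunningOrPendingTasks * executorAllocationRatio /
    tasksPerExecutorForFullParallelism).toInt
  if (pendingSpeculativeTasks > 0 && maxNeeded == 1) {
    // A single executor means a speculative copy would land next to the
    // original attempt; one extra executor gives it a different host.
    maxNeeded + 1
  } else {
    maxNeeded
  }
}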