attilapiros commented on a change in pull request #28619:
URL: https://github.com/apache/spark/pull/28619#discussion_r440107393
##########
File path: core/src/main/scala/org/apache/spark/scheduler/TaskSetManager.scala
##########
@@ -1042,7 +1046,19 @@ private[spark] class TaskSetManager(
       // bound based on that.
       logDebug("Task length threshold for speculation: " + threshold)
       for (tid <- runningTasksSet) {
-        foundTasks |= checkAndSubmitSpeculatableTask(tid, time, threshold)
+        var speculated = checkAndSubmitSpeculatableTask(tid, time, threshold)
+        if (!speculated && tidToExecutorKillTimeMapping.contains(tid)) {
Review comment:
I am just curious why this solution (introducing the per-task-ID kill time
map `tidToExecutorKillTimeMapping`) was chosen over storing the kill time per
executor (`executorToKillTimeMapping` or something like that). In the latter
case, the code here would look something like:
```scala
if (!speculated && executorToKillTimeMapping.nonEmpty) {
  val taskInfo = taskInfos(tid)
  executorToKillTimeMapping.get(taskInfo.executorId).foreach { executorKillTime =>
    ...
  }
}
```
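For concreteness, a minimal self-contained sketch of that per-executor
variant follows. The `TaskInfo` case class, the map declarations, and the
`maybeSpeculateOnKillTime` helper are hypothetical stand-ins for the real
`TaskSetManager` state, and the speculation condition (the task cannot
finish before its executor's kill time) is an assumption about what the
elided body above would check:
```scala
import scala.collection.mutable

// Hypothetical, simplified stand-in for org.apache.spark.scheduler.TaskInfo.
case class TaskInfo(taskId: Long, executorId: String)

object ExecutorKillTimeSketch {
  // One entry per decommissioning executor, rather than one per running task.
  val executorToKillTimeMapping = mutable.HashMap.empty[String, Long]
  // Task ID -> info about the task, mirroring TaskSetManager.taskInfos.
  val taskInfos = mutable.HashMap.empty[Long, TaskInfo]

  // Returns true if the task should be speculated because its executor is
  // scheduled to be killed before the task is expected to finish.
  def maybeSpeculateOnKillTime(tid: Long, time: Long, estimatedDuration: Long): Boolean = {
    var speculated = false
    val taskInfo = taskInfos(tid)
    // Option.foreach runs the body only when a kill time is registered
    // for the executor hosting this task.
    executorToKillTimeMapping.get(taskInfo.executorId).foreach { executorKillTime =>
      if (time + estimatedDuration > executorKillTime) {
        speculated = true
      }
    }
    speculated
  }
}
```
The practical difference is the size of the bookkeeping: keyed by executor,
the map holds at most one entry per decommissioning executor, whereas
`tidToExecutorKillTimeMapping` holds one entry per affected running task, at
the cost of one extra `taskInfos` lookup per check.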