squito commented on a change in pull request #22806: [SPARK-25250][CORE] : Late zombie task completions handled correctly even before new taskset launched
URL: https://github.com/apache/spark/pull/22806#discussion_r247727379
 
 

 ##########
 File path: core/src/main/scala/org/apache/spark/scheduler/DAGScheduler.scala
 ##########
 @@ -1427,6 +1428,7 @@ private[spark] class DAGScheduler(
             val status = event.result.asInstanceOf[MapStatus]
             val execId = status.location.executorId
             logDebug("ShuffleMapTask finished on " + execId)
+            taskScheduler.completeTasks(task.partitionId, task.stageId, false)
 
 Review comment:
   sorry to be late to respond here, have been traveling. So this question has come up a lot, and while there are reasons to do it, there are some complications as well, and I don't think we should roll that change into this PR, which is trying to solve a different bug. In short, it has been argued in the past that a zombie shuffle map task may still make useful progress on other tasks. There are also complications with handling tasks that don't respond well to killing (I think hadoop input readers?). To be honest, I feel like there is a stronger argument in favor of doing the killing now, though we'd probably want it behind a conf. So I'd be a +1 for the change, just that it should be separate. (And I'm probably not recalling all of the gotchas with killing tasks at the moment, so maybe with a dedicated discussion on this, we can dredge up all the cases we need to think through.)
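
   If the kill-other-attempts behavior were pursued in a separate PR, a minimal sketch of the conf-gated decision might look like the following. All names here (the conf key, `attemptsToKill`, the case classes) are illustrative stand-ins, not the actual `TaskSchedulerImpl` or `SparkConf` API:

   ```scala
   // Hypothetical sketch: when one attempt of a partition finishes, decide
   // whether the other still-running attempts (e.g. in a zombie taskset)
   // should be killed, gated behind a config flag so the old behavior
   // (let them run to completion) remains the default.
   object ZombieKillSketch {
     // stand-in for a SparkConf boolean lookup
     final case class Conf(settings: Map[String, String]) {
       def getBoolean(key: String, default: Boolean): Boolean =
         settings.get(key).map(_.toBoolean).getOrElse(default)
     }

     final case class TaskAttempt(taskId: Long, partitionId: Int, running: Boolean)

     // Returns the attempts that would be killed for the finished partition.
     // With the flag off (default), nothing is killed.
     def attemptsToKill(
         conf: Conf,
         finishedPartition: Int,
         attempts: Seq[TaskAttempt]): Seq[TaskAttempt] = {
       val killEnabled =
         conf.getBoolean("spark.scheduler.killZombieAttempts.enabled", false) // hypothetical key
       if (!killEnabled) Seq.empty
       else attempts.filter(a => a.running && a.partitionId == finishedPartition)
     }
   }
   ```

   The point of the flag is exactly the trade-off described above: by default a duplicate attempt keeps running (it may respond badly to a kill, or its executor may still do useful work), and operators opt in to reclaiming those slots.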

----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
[email protected]


With regards,
Apache Git Services

---------------------------------------------------------------------
To unsubscribe, e-mail: [email protected]
For additional commands, e-mail: [email protected]
