cloud-fan commented on a change in pull request #23871:
[SPARK-23433][SPARK-25250] [CORE] Later created TaskSet should learn about the finished partitions
URL: https://github.com/apache/spark/pull/23871#discussion_r261176256
##########
File path: core/src/main/scala/org/apache/spark/scheduler/TaskSchedulerImpl.scala
##########
@@ -837,19 +846,31 @@ private[spark] class TaskSchedulerImpl(
   }

   /**
-   * Marks the task has completed in all TaskSetManagers for the given stage.
+   * Marks the task has completed in all TaskSetManagers(active / zombie) for the given stage.
    *
    * After stage failure and retry, there may be multiple TaskSetManagers for the stage.
    * If an earlier attempt of a stage completes a task, we should ensure that the later attempts
    * do not also submit those same tasks. That also means that a task completion from an earlier
    * attempt can lead to the entire stage getting marked as successful.
+   * And there is also the possibility that the DAGScheduler submits another taskset at the same
+   * time as we're marking a task completed here -- that taskset would have a task for a partition
+   * that was already completed. We maintain the set of finished partitions in
+   * stageIdToFinishedPartitions, protected by this, so we can detect those tasks when the taskset
+   * is submitted. See SPARK-25250 for more details.
+   *
+   * note: this method must be called with a lock on this.
    */
   private[scheduler] def markPartitionCompletedInAllTaskSets(
       stageId: Int,
       partitionId: Int,
       taskInfo: TaskInfo) = {
+    // if we do not find a BitSet for this stage, which means an active TaskSetManager
+    // has already succeeded and removed the stage.
Review comment:
When can this happen? Or is it just a safeguard?
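
For context, a minimal, self-contained sketch of the bookkeeping the doc comment above describes. Only `stageIdToFinishedPartitions` and `markPartitionCompletedInAllTaskSets` come from the patch itself; the other names (`SimpleScheduler`, `SimpleTaskSetManager`, `registerTaskSet`, `stageCompleted`) are illustrative stand-ins rather than Spark's real API, and the bodies are an assumption about how the pieces fit together, not the PR's actual code.

```scala
import scala.collection.mutable.{BitSet, HashMap}

// Illustrative stand-in for a TaskSetManager: it only tracks which partitions
// it should no longer run tasks for.
class SimpleTaskSetManager(val stageId: Int) {
  private val completed = new BitSet
  def markPartitionCompleted(partitionId: Int): Unit = completed += partitionId
  def isPartitionCompleted(partitionId: Int): Boolean = completed(partitionId)
}

// Illustrative stand-in for the scheduler-side bookkeeping. Every method takes a
// lock on `this`, mirroring the "must be called with a lock on this" note.
class SimpleScheduler {
  // stageId -> partitions already finished by any attempt of that stage.
  private val stageIdToFinishedPartitions = new HashMap[Int, BitSet]
  // stageId -> every TaskSetManager (active or zombie) for that stage.
  private val taskSetsByStage = new HashMap[Int, List[SimpleTaskSetManager]]

  // Called when the DAGScheduler submits a (possibly later) attempt of a stage:
  // partitions finished by earlier attempts are marked completed up front, so
  // the new attempt never re-offers them.
  def registerTaskSet(tsm: SimpleTaskSetManager): Unit = synchronized {
    taskSetsByStage(tsm.stageId) = tsm :: taskSetsByStage.getOrElse(tsm.stageId, Nil)
    val finished = stageIdToFinishedPartitions.getOrElseUpdate(tsm.stageId, new BitSet)
    finished.foreach(p => tsm.markPartitionCompleted(p))
  }

  // Called when a task from any attempt finishes: record the partition and tell
  // every TaskSetManager for the stage, so other attempts skip it too.
  def markPartitionCompletedInAllTaskSets(stageId: Int, partitionId: Int): Unit = synchronized {
    // If no BitSet is found, an active TaskSetManager already succeeded and the
    // stage's bookkeeping was removed, so there is nothing left to record.
    stageIdToFinishedPartitions.get(stageId).foreach { finished =>
      finished += partitionId
      taskSetsByStage.getOrElse(stageId, Nil).foreach(_.markPartitionCompleted(partitionId))
    }
  }

  // Called once an attempt of the stage fully succeeds: drop the bookkeeping so
  // the maps do not grow without bound.
  def stageCompleted(stageId: Int): Unit = synchronized {
    stageIdToFinishedPartitions.remove(stageId)
    taskSetsByStage.remove(stageId)
  }
}
```

In this model, the `get(stageId)` lookup in `markPartitionCompletedInAllTaskSets` can legitimately find nothing once `stageCompleted` has cleaned the stage up, which is the situation the quoted `// if we do not find a BitSet ...` comment seems to be alluding to.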