pgandhi999 commented on a change in pull request #23871:
[SPARK-23433][SPARK-25250][CORE] Later created TaskSet should learn about the finished partitions
URL: https://github.com/apache/spark/pull/23871#discussion_r260792411
##########
File path: core/src/main/scala/org/apache/spark/scheduler/TaskSchedulerImpl.scala
##########
@@ -292,13 +298,16 @@ private[spark] class TaskSchedulerImpl(
   * given TaskSetManager have completed, so state associated with the TaskSetManager should be
   * cleaned up.
   */
-  def taskSetFinished(manager: TaskSetManager): Unit = synchronized {
+  def taskSetFinished(manager: TaskSetManager, success: Boolean): Unit = synchronized {
     taskSetsByStageIdAndAttempt.get(manager.taskSet.stageId).foreach { taskSetsForStage =>
       taskSetsForStage -= manager.taskSet.stageAttemptId
       if (taskSetsForStage.isEmpty) {
         taskSetsByStageIdAndAttempt -= manager.taskSet.stageId
       }
     }
+    if (success) {
+      stageIdToFinishedPartitions -= manager.taskSet.stageId
Review comment:
Rest all LGTM. Just one question: would it be better to remove the finished partitions from the DAGScheduler event loop on stage completion, to avoid a potential race (suppose one stage attempt is fully complete while another attempt keeps running until it gets killed)? Just thinking out loud. A rough sketch of that alternative is below.
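For concreteness, here is a minimal, self-contained sketch of that alternative. This is a toy model, not Spark's actual classes; the object and method names (`FinishedPartitionCleanupSketch`, `onStageCompleted`, `markPartitionFinished`) are illustrative assumptions, not the code in this patch:

```scala
import scala.collection.mutable

// Toy model of the suggested alternative: drop finished-partition state
// when the *stage* completes, from the single-threaded DAGScheduler event
// loop, rather than when one individual TaskSetManager finishes.
object FinishedPartitionCleanupSketch {
  // stageId -> partitions already finished by any attempt of that stage
  private val stageIdToFinishedPartitions =
    mutable.Map.empty[Int, mutable.BitSet]

  // A later TaskSet attempt reads this to skip already-finished partitions.
  def finishedPartitions(stageId: Int): mutable.BitSet = synchronized {
    stageIdToFinishedPartitions.getOrElseUpdate(stageId, new mutable.BitSet)
  }

  // Record a completed partition, possibly reported by a zombie attempt.
  def markPartitionFinished(stageId: Int, partitionId: Int): Unit =
    synchronized {
      finishedPartitions(stageId) += partitionId
    }

  // Cleanup hook that would run on the DAGScheduler event loop once the
  // whole stage completes, so the entry is only dropped after no attempt
  // of that stage can still be registering partitions against it.
  def onStageCompleted(stageId: Int): Unit = synchronized {
    stageIdToFinishedPartitions -= stageId
  }
}
```

The point of keying the cleanup to stage completion on the event loop is ordering: it runs after all attempts of the stage are resolved, whereas cleanup in `taskSetFinished` keys off a single successful attempt while another attempt may still be running.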