srowen commented on a change in pull request #22806: [SPARK-25250][CORE] : On successful completion of a task attempt on a parti…
URL: https://github.com/apache/spark/pull/22806#discussion_r244403945
 
 

 ##########
 File path: 
core/src/main/scala/org/apache/spark/scheduler/TaskSchedulerImpl.scala
 ##########
 @@ -286,6 +286,29 @@ private[spark] class TaskSchedulerImpl(
     }
   }
 
+  /**
+   * SPARK-25250: Whenever any Result Task gets successfully completed, we simply mark the
+   * corresponding partition id as completed in all attempts for that particular stage. As a
+   * result, we do not see any Killed tasks due to TaskCommitDenied Exceptions showing up
+   * in the UI.
+   */
+  override def markPartitionIdAsCompletedAndKillCorrespondingTaskAttempts(
 
 Review comment:
   Doesn't this logic overlap with `killAllTaskAttempts`? Should it reuse that logic? I understand it does something a little different, and I don't know this code well, but it seems like there are related but separate implementations of something similar here.
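
   To illustrate the kind of reuse being suggested, here is a minimal, hypothetical sketch. `ToyScheduler`, `StageAttempt`, and `TaskAttempt` are illustrative stand-ins, not the real `TaskSchedulerImpl` or `TaskSetManager` APIs; the point is only that "kill every attempt of a stage" and "kill the remaining attempts of one completed partition" can share a single traversal/kill helper rather than two parallel implementations:

```scala
import scala.collection.mutable

// Stand-in for a running task attempt; field names are illustrative only.
case class TaskAttempt(taskId: Long, partitionId: Int, var running: Boolean = true)

// Stand-in for a TaskSetManager: one stage attempt with its task attempts.
class StageAttempt(val stageId: Int, val attemptId: Int) {
  val attempts = mutable.Buffer.empty[TaskAttempt]

  def kill(t: TaskAttempt, reason: String): Unit = {
    t.running = false
    println(s"killed task ${t.taskId} (stage $stageId.$attemptId): $reason")
  }
}

class ToyScheduler {
  // All live attempts of a stage, keyed by stage id.
  private val attemptsByStage = mutable.Map.empty[Int, mutable.Buffer[StageAttempt]]

  def register(sa: StageAttempt): Unit =
    attemptsByStage.getOrElseUpdate(sa.stageId, mutable.Buffer.empty) += sa

  // Shared helper: kill every running attempt of `stageId` matching `shouldKill`.
  private def killMatchingAttempts(stageId: Int, reason: String)(
      shouldKill: TaskAttempt => Boolean): Unit =
    for {
      sa <- attemptsByStage.getOrElse(stageId, mutable.Buffer.empty[StageAttempt])
      t  <- sa.attempts if t.running && shouldKill(t)
    } sa.kill(t, reason)

  // Analogue of killAllTaskAttempts: kill everything for the stage.
  def killAllTaskAttempts(stageId: Int, reason: String): Unit =
    killMatchingAttempts(stageId, reason)(_ => true)

  // Analogue of the method under review: once a partition has a successful
  // result, kill only the duplicate attempts for that partition, in every
  // stage attempt, reusing the same helper.
  def markPartitionCompleted(stageId: Int, partitionId: Int): Unit =
    killMatchingAttempts(stageId, s"partition $partitionId already completed")(
      _.partitionId == partitionId)
}

object Demo extends App {
  val sched = new ToyScheduler
  val a0 = new StageAttempt(stageId = 1, attemptId = 0)
  a0.attempts ++= Seq(TaskAttempt(10L, partitionId = 0), TaskAttempt(11L, partitionId = 1))
  val a1 = new StageAttempt(stageId = 1, attemptId = 1)
  a1.attempts ++= Seq(TaskAttempt(20L, partitionId = 0))
  sched.register(a0); sched.register(a1)

  // Partition 0 succeeded somewhere: only its duplicates are killed, across attempts.
  sched.markPartitionCompleted(stageId = 1, partitionId = 0)
}
```

   Whether the real `markPartitionIdAsCompletedAndKillCorrespondingTaskAttempts` can actually be expressed on top of `killAllTaskAttempts` (or a common private helper) depends on `TaskSetManager` bookkeeping that this sketch deliberately glosses over.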
