GitHub user mridulm commented on a diff in the pull request:

    https://github.com/apache/spark/pull/21577#discussion_r196235040
  
    --- Diff: core/src/main/scala/org/apache/spark/scheduler/OutputCommitCoordinator.scala ---
    @@ -109,20 +116,21 @@ private[spark] class OutputCommitCoordinator(conf: SparkConf, isDriver: Boolean)
        * @param maxPartitionId the maximum partition id that could appear in this stage's tasks (i.e.
        *                       the maximum possible value of `context.partitionId`).
        */
    -  private[scheduler] def stageStart(stage: StageId, maxPartitionId: Int): Unit = synchronized {
    +  private[scheduler] def stageStart(stage: Int, maxPartitionId: Int): Unit = synchronized {
         stageStates(stage) = new StageState(maxPartitionId + 1)
    --- End diff ---
    
    My memory is a bit rusty here, but are we changing the semantics of which task can commit?
    A couple of queries:
    * Are we allowing a task from a previous stage attempt to commit for the current stage attempt?
    ** If yes, we should not overwrite `stageStates(stage)` if it already exists (see the sketch after this list).
    *** Based on `TaskIdentifier` above, I think the answer is yes?
    ** If no, we should check and reject commit requests from tasks of an 'older' stage attempt when the current stage attempt is different.
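    
    For concreteness, here is a minimal, self-contained sketch of the two options above. This is not Spark's implementation: `StageState`, `authorizedCommitters`, and `latestAttempt` are simplified stand-ins for the coordinator's real bookkeeping.
    
    ```scala
    import scala.collection.mutable
    
    // Simplified stand-in for the coordinator's per-stage bookkeeping.
    class StageState(numPartitions: Int) {
      // partition id -> stage attempt of the task authorized to commit, if any
      val authorizedCommitters: Array[Option[Int]] = Array.fill(numPartitions)(None)
      // highest stage attempt seen so far (used only by option 2 below)
      var latestAttempt: Int = 0
    }
    
    object CommitSketch {
      private val stageStates = mutable.Map[Int, StageState]()
    
      // Option 1 ("if yes"): on a stage retry, keep the existing state so a
      // commit authorized during an earlier attempt is still honored, instead
      // of being wiped out by the overwrite.
      def stageStart(stage: Int, maxPartitionId: Int): Unit = synchronized {
        stageStates.get(stage) match {
          case Some(existing) =>
            require(existing.authorizedCommitters.length == maxPartitionId + 1,
              s"partition count changed across attempts of stage $stage")
          case None =>
            stageStates(stage) = new StageState(maxPartitionId + 1)
        }
      }
    
      // Option 2 ("if no"): reject commit requests coming from a stage attempt
      // older than the latest one we have seen for this stage.
      def canCommit(stage: Int, stageAttempt: Int, partition: Int): Boolean = synchronized {
        val state = stageStates(stage)
        if (stageAttempt < state.latestAttempt) {
          false  // stale attempt: a newer attempt of this stage is already running
        } else {
          state.latestAttempt = stageAttempt
          state.authorizedCommitters(partition) match {
            case None =>
              state.authorizedCommitters(partition) = Some(stageAttempt)
              true
            case Some(_) =>
              false  // another task already holds the commit right for this partition
          }
        }
      }
    }
    ```
    
    Either way, the key design question is whether `stageStates(stage) = new StageState(...)` on a retry should reset commit authorizations granted to tasks of earlier attempts.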


