jiang13021 commented on code in PR #2609:
URL: https://github.com/apache/celeborn/pull/2609#discussion_r1672416520


##########
client-spark/spark-2/src/main/java/org/apache/spark/shuffle/celeborn/SparkUtils.java:
##########
@@ -136,6 +136,11 @@ public static int celebornShuffleId(
     }
   }
 
+  public static int getMapAttemptNumber(TaskContext context) {
+    assert (context.stageAttemptNumber() < (1 << 15) && context.attemptNumber() < (1 << 16));
+    return (context.stageAttemptNumber() << 16) | context.attemptNumber();
+  }
+

Review Comment:
   To reproduce this problem, the following conditions must both be met:
   1. The stage is resubmitted and the celeborn shuffle id is reused
   2. There are at least two valid MapperEnds with the same shuffleId, mapId, and taskAttemptNumber
   
   Normally, these two conditions cannot be met at the same time: if the stage is INDETERMINATE, celeborn won't reuse the celeborn shuffle id, and if the stage is non-INDETERMINATE, DAGScheduler will only submit the missing tasks, which means no duplicate task will be submitted. However, resubmitting a non-INDETERMINATE barrier stage can meet both conditions at the same time.
   
   Even worse, when the barrier stage is UNORDERED, shuffle read may find two data blocks with the same shuffleId, mapId, taskAttemptNumber, and batchId. According to the deduplication rule, celeborn will read only one of the batches and treat the other as a duplicate, but in fact the two batches may differ, resulting in a data correctness issue.



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: [email protected]

For queries about this service, please contact Infrastructure at:
[email protected]
