mridulm commented on code in PR #2609:
URL: https://github.com/apache/celeborn/pull/2609#discussion_r1671021414


##########
client-spark/spark-2/src/main/java/org/apache/spark/shuffle/celeborn/SparkUtils.java:
##########
@@ -136,6 +136,11 @@ public static int celebornShuffleId(
     }
   }
 
+  public static int getMapAttemptNumber(TaskContext context) {
+    assert (context.stageAttemptNumber() < (1 << 15) && context.attemptNumber() < (1 << 16));
+    return (context.stageAttemptNumber() << 16) | context.attemptNumber();
+  }
+

Review Comment:
   (When `celeborn.client.spark.fetch.throwsFetchFailure` is not enabled)
   We should not make this assumption, as it is not enforced by Spark.
   I do agree these are ridiculously large numbers, but since Spark puts no upper bound on how high users can set these configs, the packing can end up breaking. (For the life of me, I can't imagine such a high value though!)
   
   What I would suggest is to first enforce the bound in Spark itself. Since Spark 4.0 is in progress, this might be a good time to add that enforcement; if the Spark community accepts it, we can leverage it here.
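   To make the concern concrete, here is a minimal standalone sketch (hypothetical names, not Celeborn code) of the same bit-packing scheme from the diff, with the bounds made explicit as a runtime check instead of an `assert` (which is disabled by default in the JVM). The limits `1 << 15` and `1 << 16` come from the diff; nothing in Spark guarantees them.
   
   ```java
   public class AttemptPacking {
     // Assumed bounds (from the diff above); Spark itself does not enforce them.
     static final int STAGE_ATTEMPT_LIMIT = 1 << 15; // keeps the packed int non-negative
     static final int TASK_ATTEMPT_LIMIT = 1 << 16;  // task attempt must fit in the low 16 bits
   
     // Pack stage attempt into the high bits, task attempt into the low 16 bits.
     static int pack(int stageAttempt, int taskAttempt) {
       if (stageAttempt >= STAGE_ATTEMPT_LIMIT || taskAttempt >= TASK_ATTEMPT_LIMIT) {
         // If taskAttempt overflowed 16 bits, the OR below would corrupt the stage bits.
         throw new IllegalArgumentException("attempt numbers exceed packing bounds");
       }
       return (stageAttempt << 16) | taskAttempt;
     }
   
     static int stageAttempt(int packed) { return packed >>> 16; }
     static int taskAttempt(int packed)  { return packed & 0xFFFF; }
   
     public static void main(String[] args) {
       int packed = pack(3, 7);
       System.out.println(packed);               // 3 * 65536 + 7 = 196615
       System.out.println(stageAttempt(packed)); // 3
       System.out.println(taskAttempt(packed));  // 7
     }
   }
   ```
   
   Without the check, e.g. `(0 << 16) | 65537` decodes as stage attempt 1, task attempt 1, silently colliding with a different attempt pair; that is why the bound needs to be enforced somewhere, ideally in Spark.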



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: [email protected]

For queries about this service, please contact Infrastructure at:
[email protected]
