jiang13021 commented on code in PR #2609:
URL: https://github.com/apache/celeborn/pull/2609#discussion_r1713146045


##########
client-spark/spark-2/src/main/java/org/apache/spark/shuffle/celeborn/SparkShuffleManager.java:
##########
@@ -66,6 +67,26 @@ public class SparkShuffleManager implements ShuffleManager {
  private ExecutorShuffleIdTracker shuffleIdTracker = new ExecutorShuffleIdTracker();
 
   public SparkShuffleManager(SparkConf conf, boolean isDriver) {
+    int maxStageAttempts =
+        conf.getInt(
+            "spark.stage.maxConsecutiveAttempts",
+            DAGScheduler.DEFAULT_MAX_CONSECUTIVE_STAGE_ATTEMPTS());
+    int maxTaskAttempts = (Integer) conf.get(package$.MODULE$.MAX_TASK_FAILURES());

Review Comment:
   In Spark 2, the configuration parameter `spark.task.maxFailures` is exposed as `MAX_TASK_FAILURES`, while in Spark 3 it is renamed to `TASK_MAX_FAILURES`. As a result, we cannot reuse the same `validateAttemptConfig` for both Spark 2 and Spark 3. I have added a `SparkCommonUtils` class that includes the methods `validateMaxAttempts` and `getEncodedAttemptNumber`.
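   Not part of the diff above, but roughly the shape such a helper could take. This is a minimal sketch: the 16-bit packing layout and the method bodies here are assumptions for illustration, not the actual Celeborn implementation.

   ```java
   // Hypothetical sketch of the SparkCommonUtils helpers described above.
   public class SparkCommonUtils {

     // Assumption: the stage attempt is packed into the upper 16 bits and the
     // task attempt into the lower 16 bits of a single int, so both configured
     // maximums must fit in 16 bits for the encoding to stay unambiguous.
     public static void validateMaxAttempts(int maxStageAttempts, int maxTaskAttempts) {
       if (maxStageAttempts <= 0 || maxStageAttempts >= (1 << 16)
           || maxTaskAttempts <= 0 || maxTaskAttempts >= (1 << 16)) {
         throw new IllegalArgumentException(
             "max stage/task attempts must be positive and fit in 16 bits each");
       }
     }

     // Combine the two attempt numbers into one int so a single value uniquely
     // identifies a (stage attempt, task attempt) pair.
     public static int getEncodedAttemptNumber(int stageAttemptNumber, int taskAttemptNumber) {
       return (stageAttemptNumber << 16) | taskAttemptNumber;
     }
   }
   ```

   Keeping these two methods in a Spark-version-agnostic class means only the config lookup (`MAX_TASK_FAILURES` vs `TASK_MAX_FAILURES`) has to live in the version-specific shuffle managers.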



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: [email protected]

For queries about this service, please contact Infrastructure at:
[email protected]
