dongjoon-hyun commented on code in PR #48776:
URL: https://github.com/apache/spark/pull/48776#discussion_r1831459391


##########
core/src/test/scala/org/apache/spark/SparkContextSuite.scala:
##########
@@ -1423,6 +1423,41 @@ class SparkContextSuite extends SparkFunSuite with LocalSparkContext with Eventu
     sc = new SparkContext(conf)
     sc.stop()
   }
+
+  test("SPARK-50247: BLOCK_MANAGER_REREGISTRATION_FAILED should be counted as 
network failure") {

Review Comment:
   For the record, this test case has the same structure as the existing `HEARTBEAT_FAILURE` test case.



##########
core/src/test/scala/org/apache/spark/SparkContextSuite.scala:
##########
@@ -1423,6 +1423,41 @@ class SparkContextSuite extends SparkFunSuite with LocalSparkContext with Eventu
     sc = new SparkContext(conf)
     sc.stop()
   }
+
+  test("SPARK-50247: BLOCK_MANAGER_REREGISTRATION_FAILED should be counted as 
network failure") {
+    val conf = new SparkConf().set(TASK_MAX_FAILURES, 1)
+    val sc = new SparkContext("local-cluster[1, 1, 1024]", "test-exit-code", conf)
+    val result = sc.parallelize(1 to 10, 1).map { x =>
+      val context = org.apache.spark.TaskContext.get()
+      if (context.taskAttemptId() == 0) {
+        System.exit(ExecutorExitCode.BLOCK_MANAGER_REREGISTRATION_FAILED)
+      } else {
+        x
+      }
+    }.count()
+    assert(result == 10L)
+    sc.stop()
+  }
+
+  test("SPARK-50247: BLOCK_MANAGER_REREGISTRATION_FAILED will be counted as 
task failure when " +

Review Comment:
   For the record, this test case has the same structure as the existing `HEARTBEAT_FAILURE` test case.
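
Since both new cases follow the same pattern, here is a minimal sketch of what the referenced `HEARTBEAT_FAILURE` test presumably looks like, reconstructed from the snippet quoted above rather than copied from SparkContextSuite; the exact test name and surrounding details in the real suite may differ.

```scala
// Minimal sketch of the referenced HEARTBEAT_FAILURE test case, assuming it
// mirrors the new BLOCK_MANAGER_REREGISTRATION_FAILED test quoted above; not
// copied verbatim from SparkContextSuite. Relies on the suite's existing
// imports, e.g. org.apache.spark.executor.ExecutorExitCode and
// org.apache.spark.internal.config.TASK_MAX_FAILURES.
test("HEARTBEAT_FAILURE should be counted as network failure") {
  // With spark.task.maxFailures = 1, a single task failure would fail the job,
  // so the job can only succeed if the executor exit below is treated as a
  // network (executor-loss) failure rather than a task failure.
  val conf = new SparkConf().set(TASK_MAX_FAILURES, 1)
  val sc = new SparkContext("local-cluster[1, 1, 1024]", "test-exit-code", conf)
  val result = sc.parallelize(1 to 10, 1).map { x =>
    val context = org.apache.spark.TaskContext.get()
    if (context.taskAttemptId() == 0) {
      // The first attempt kills its executor with the heartbeat-related exit
      // code; the retried attempt on a fresh executor returns normally.
      System.exit(ExecutorExitCode.HEARTBEAT_FAILURE)
    } else {
      x
    }
  }.count()
  assert(result == 10L)
  sc.stop()
}
```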


