dongjoon-hyun commented on a change in pull request #25333: [SPARK-28597][SS] Add config to retry spark streaming's meta log when it met error
URL: https://github.com/apache/spark/pull/25333#discussion_r313166704
 
 

 ##########
 File path: sql/core/src/test/scala/org/apache/spark/sql/execution/streaming/MicroBatchExecutionSuite.scala
 ##########
 @@ -68,4 +73,44 @@ class MicroBatchExecutionSuite extends StreamTest with BeforeAndAfter {
      CheckNewAnswer((25, 1), (30, 1))   // This should not throw the error reported in SPARK-24156
     )
   }
+
+  test("SPARK-28597: Add config to retry spark streaming's meta log when it 
met") {
+    val s = MemoryStream[Int]
+    val df = s.toDF()
+    // Specified checkpointLocation manually to init metadata file
+    val tmp =
+      new File(System.getProperty("java.io.tmpdir"), UUID.randomUUID().toString).getCanonicalPath
+    testStream(s.toDF())(
+      StartStream(checkpointLocation = tmp)
+    )
+
+    // fail with fewer retries
+    df.sparkSession.sessionState.conf.setConfString(
+      SQLConf.STREAMING_CHECKPOINT_FILE_MANAGER_CLASS.parent.key,
+      classOf[FakeFileSystemBasedCheckpointFileManager].getName)
+    df.sparkSession.sessionState.conf.setConfString(
+      SQLConf.STREAMING_META_DATA_NUM_RETRIES.key,
+      1.toString)
 
 Review comment:
   Ditto for `STREAMING_META_DATA_NUM_RETRIES`. In this test case, you are trying to reset it to the default value afterwards, but the test can fail at line 94; the other test cases would then accidentally run with this `1`. That is why we use `withSQLConf`.
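   
   For illustration, a minimal sketch of that pattern (assuming this suite mixes in `SQLTestUtils`, which `StreamTest` does, so `withSQLConf` is in scope). `withSQLConf` restores the previous values in a `finally` block, so a mid-test failure cannot leak the overrides into later test cases:
   
   ```scala
   // Sketch only: set both configs for the duration of the body; the old
   // values are restored even if an assertion inside the body throws.
   withSQLConf(
     SQLConf.STREAMING_CHECKPOINT_FILE_MANAGER_CLASS.parent.key ->
       classOf[FakeFileSystemBasedCheckpointFileManager].getName,
     SQLConf.STREAMING_META_DATA_NUM_RETRIES.key -> "1") {
     // exercise the retry path that is expected to fail with only one retry
   }
   ```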
