HeartSaVioR commented on a change in pull request #25048: [SPARK-28247][SS] Fix flaky test "query without test harness" on ContinuousSuite
URL: https://github.com/apache/spark/pull/25048#discussion_r300506026
 
 

 ##########
 File path: sql/core/src/test/scala/org/apache/spark/sql/streaming/continuous/ContinuousSuite.scala
 ##########
 @@ -45,7 +45,12 @@ class ContinuousSuiteBase extends StreamTest {
           case ContinuousScanExec(_, _, r: RateStreamContinuousStream, _) => r
         }.get
 
-        val deltaMs = numTriggers * 1000 + 300
+        // Add 3s to account for slow initialization of the partition reader. Rows are
+        // committed in the epoch in which they are written, and all previous epochs must be
+        // committed before the epoch containing the output rows can be committed. Slow
+        // partition reader initialization combined with a tiny trigger interval can therefore
+        // leave the output rows waiting a long time to be committed.
+        val deltaMs = numTriggers * 1000 + 3000
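For context on what this padding protects, here is a hedged reconstruction of the helper the hunk above comes from (`waitForRateSourceTriggers` in `ContinuousSuiteBase`). Only the `collectFirst` and `deltaMs` lines are visible in the diff; the signature and the busy wait on `reader.creationTime` are assumptions about code not shown here, not a quote of it.

```scala
// Hedged sketch of ContinuousSuiteBase.waitForRateSourceTriggers; imports are as in
// ContinuousSuite.scala and are elided here. Everything outside the collectFirst and
// deltaMs lines is an assumption about the surrounding helper.
protected def waitForRateSourceTriggers(query: ContinuousExecution, numTriggers: Int): Unit = {
  // Locate the continuous rate-source reader the query is currently running against.
  val reader = query.lastExecution.executedPlan.collectFirst {
    case ContinuousScanExec(_, _, r: RateStreamContinuousStream, _) => r
  }.get

  // The rate source emits one trigger's worth of rows per second; the extra 3000 ms is
  // the padding discussed above, covering slow partition-reader initialization.
  val deltaMs = numTriggers * 1000 + 3000

  // Sleep until deltaMs has elapsed since the reader was created.
  while (System.currentTimeMillis < reader.creationTime + deltaMs) {
    Thread.sleep(reader.creationTime + deltaMs - System.currentTimeMillis)
  }
}
```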
 
 Review comment:
   I'm not sure I can track precisely when these rows are committed (which would be the safest way), since we check this from the driver side while the delays happen on the executor side.
   
   But given that the main delay occurs before the first epoch is committed, instead of adding a bigger buffer we could wait for the first epoch to be committed and then add 2s, which should be enough for 4 rows to be emitted by the rate source. I'll make that change.
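
   A minimal sketch of that alternative, as an illustration of the idea rather than the final patch. The `StreamingQueryWrapper` cast, the `awaitEpoch` call, the `"noharness"` memory-sink table name, and the rowsPerSecond = 2 rate are assumptions about how the "query without test harness" test is written, not details taken from this comment.
   
   ```scala
   // Sketch of the proposed alternative (not the committed fix): pay the slow-initialization
   // cost by waiting for the first epoch to be committed, then add a fixed 2s, which at an
   // assumed rowsPerSecond = 2 is enough time for the 4 expected rows (0, 1, 2, 3).
   val continuousExecution = query.asInstanceOf[StreamingQueryWrapper]
     .streamingQuery.asInstanceOf[ContinuousExecution]
   
   continuousExecution.awaitEpoch(0)   // first epoch committed; reader initialization is done
   Thread.sleep(2000)                  // ~2s of rate-source output after that first commit
   query.stop()
   
   val results = spark.read.table("noharness").collect()
   assert(Set(0, 1, 2, 3).map(Row(_)).subsetOf(results.toSet))
   ```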

----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 