Github user tdas commented on a diff in the pull request:

    https://github.com/apache/spark/pull/7600#discussion_r35688829
  
    --- Diff: streaming/src/test/scala/org/apache/spark/streaming/CheckpointSuite.scala ---
    @@ -391,6 +393,32 @@ class CheckpointSuite extends TestSuiteBase {
         testCheckpointedOperation(input, operation, output, 7)
       }
     
    +  test("recovery maintains rate controller") {
    +    ssc = new StreamingContext(conf, batchDuration)
    +    ssc.checkpoint(checkpointDir)
    +
    +    val dstream = new RateLimitInputDStream(ssc) {
    +      override val rateController =
    +        Some(new ReceiverRateController(id, new ConstantEstimator(200.0)))
    +    }
    +    SingletonDummyReceiver.reset()
    +
    +    val output = new TestOutputStreamWithPartitions(dstream.checkpoint(batchDuration * 2))
    +    output.register()
    +    runStreams(ssc, 5, 5)
    +
    +    SingletonDummyReceiver.reset()
    +    ssc = new StreamingContext(checkpointDir)
    +    ssc.start()
    +    val outputNew = advanceTimeWithRealDelay(ssc, 2)
    --- End diff ---
    
    That is because you were running with the manual clock configured by the conf 
that TestSuiteBase generates. With the manual clock, batches will not be 
generated unless you explicitly advance the clock. If you run with a real 
(system) clock and wrap the check in `eventually` with a sufficiently large 
timeout, the batches will complete on their own.
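
    For reference, here is a minimal sketch contrasting the two clock modes. It 
    assumes the standard `spark.streaming.clock` conf key (which TestSuiteBase 
    points at `ManualClock`) and ScalaTest's `Eventually`; the master/app-name 
    values, the stream registration steps, and the `collectedOutput` buffer are 
    hypothetical placeholders, not code from this PR.

    ```scala
    import org.scalatest.concurrent.Eventually._
    import org.scalatest.time.SpanSugar._

    import org.apache.spark.SparkConf
    import org.apache.spark.streaming.{Seconds, StreamingContext}

    // Manual clock (what TestSuiteBase's conf sets up): batches are generated
    // only when the clock is advanced explicitly.
    def manualClockSketch(): Unit = {
      val conf = new SparkConf()
        .setMaster("local[2]")
        .setAppName("manual-clock-sketch")
        .set("spark.streaming.clock", "org.apache.spark.util.ManualClock")
      val ssc = new StreamingContext(conf, Seconds(1))
      // ... register input/output streams, then ssc.start() ...
      // Batches only run when time is added to the manual clock, e.g. via
      // TestSuiteBase's advanceTimeWithRealDelay(ssc, numBatches).
      ssc.stop()
    }

    // Real (system) clock: batches fire on wall-clock time, so wait for the
    // output with `eventually` and a generous timeout instead of advancing time.
    def systemClockSketch(): Unit = {
      val conf = new SparkConf()
        .setMaster("local[2]")
        .setAppName("system-clock-sketch")
        .set("spark.streaming.clock", "org.apache.spark.util.SystemClock")
      val ssc = new StreamingContext(conf, Seconds(1))
      // ... register input/output streams, then ssc.start() ...
      // eventually(timeout(10.seconds)) {
      //   assert(collectedOutput.nonEmpty)  // `collectedOutput`: hypothetical buffer
      // }
      ssc.stop()
    }
    ```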

