GitHub user c-horn commented on the issue:

    https://github.com/apache/spark/pull/21676
  
    Changing `OneTimeExecutor` like this resolves this issue:
    ```diff
    case class OneTimeExecutor() extends TriggerExecutor {
    
      /**
       * Execute a single batch using `batchRunner`.
       */
    -  override def execute(batchRunner: () => Boolean): Unit = batchRunner()
    +  override def execute(batchRunner: () => Boolean): Unit = batchRunner() && batchRunner()
    }
    ```
    ... but the type becomes semantically incorrect: a `OneTimeExecutor` that may run the batch twice no longer matches its name.
    
    Is this an acceptable solution? It appears that a lot of the 
`MicroBatchExecution` code makes assumptions about state from the previous 
batch, which may or may not be populated in the first iteration after a stream 
restart.
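
    To illustrate the proposed behaviour, here is a minimal, self-contained sketch (the `TriggerExecutor` trait below is a stand-in, not the Spark source): because `&&` short-circuits, the second `batchRunner()` invocation only happens when the first returns `true`.

    ```scala
    // Stand-in for Spark's TriggerExecutor trait, for illustration only.
    trait TriggerExecutor {
      def execute(batchRunner: () => Boolean): Unit
    }

    // Proposed change: run the batch a second time, but only if the
    // first run returned true (&& short-circuits otherwise).
    case class OneTimeExecutor() extends TriggerExecutor {
      override def execute(batchRunner: () => Boolean): Unit =
        batchRunner() && batchRunner()
    }

    object Demo extends App {
      var runs = 0
      // First run returns true, so && triggers a second invocation.
      OneTimeExecutor().execute { () => runs += 1; true }
      println(runs) // 2

      runs = 0
      // First run returns false, so && short-circuits after one run.
      OneTimeExecutor().execute { () => runs += 1; false }
      println(runs) // 1
    }
    ```

    Whether the first batch reports success therefore determines whether the second pass (which would see the state established by the first) runs at all.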

