Github user victor-wong commented on the issue:

    https://github.com/apache/spark/pull/19824
  
    @CodingCat please check out the difference between the two PRs.
    
    ```
         if (jobSet.hasCompleted) {
    -      jobSets.remove(jobSet.time)
    -      jobGenerator.onBatchCompletion(jobSet.time)
    -      logInfo("Total delay: %.3f s for time %s (execution: %.3f s)".format(
    -        jobSet.totalDelay / 1000.0, jobSet.time.toString,
    -        jobSet.processingDelay / 1000.0
    -      ))
           listenerBus.post(StreamingListenerBatchCompleted(jobSet.toBatchInfo))
         }
    ```
    As shown above, in https://github.com/apache/spark/pull/16542, if a Job fails, listenerBus still posts a StreamingListenerBatchCompleted event, which I believe is incorrect, because the batch has not actually completed (one of its Jobs failed).
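
    To illustrate the point, here is a minimal, hypothetical sketch of the guard I have in mind; the names (`Job`, `JobSet`, `postBatchCompleted`) are simplified stand-ins, not Spark's actual API:

    ```scala
    // Hypothetical simplification of a streaming JobSet: a batch's set of jobs.
    case class Job(failed: Boolean)

    case class JobSet(time: Long, jobs: Seq[Job]) {
      // In this sketch we assume all jobs have finished running.
      def hasCompleted: Boolean = true
      def hasFailedJobs: Boolean = jobs.exists(_.failed)
    }

    // Only report the batch as completed when every job succeeded;
    // a failed job means the batch did not actually complete.
    def postBatchCompleted(jobSet: JobSet): Option[String] =
      if (jobSet.hasCompleted && !jobSet.hasFailedJobs)
        Some(s"StreamingListenerBatchCompleted(time=${jobSet.time})")
      else
        None
    ```

    With this guard, `postBatchCompleted(JobSet(1000L, Seq(Job(failed = true))))` returns `None` instead of posting a completion event for a batch that contains a failed job.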

