Github user tdas commented on a diff in the pull request:

    https://github.com/apache/spark/pull/4032#discussion_r23361754
  
    --- Diff: streaming/src/main/scala/org/apache/spark/streaming/scheduler/ReceivedBlockTracker.scala ---
    @@ -106,6 +106,12 @@ private[streaming] class ReceivedBlockTracker(
           timeToAllocatedBlocks(batchTime) = allocatedBlocks
           lastAllocatedBatchTime = batchTime
           allocatedBlocks
    +    } else if (batchTime == lastAllocatedBatchTime) {
    +      // This situation occurs when the WAL ends with a BatchAllocationEvent but no
    +      // matching BatchCleanupEvent: a processed or half-processed batch job may need
    +      // to be processed again, so batchTime will equal lastAllocatedBatchTime.
    +      // This situation only occurs during recovery.
    +      logWarning(s"Possibly processed batch $batchTime needs to be processed again in WAL recovery")
         } else {
    --- End diff --
    
    Actually, let's remove this exception completely. Instead, any attempt to
    allocate blocks to a batch with batchTime <= lastAllocatedBatchTime should be
    completely ignored.
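
    For illustration, a minimal, self-contained sketch of that policy (the
    AllocationGuardSketch object, the simplified Time stand-in, the use of plain
    strings for blocks, and the main method are assumptions made for this example;
    this is not the actual ReceivedBlockTracker code):

    // Sketch of the suggested behavior: allocation requests for a batch that is
    // not strictly newer than the last allocated batch are ignored rather than
    // raising an exception.
    object AllocationGuardSketch {

      // Simplified stand-in for org.apache.spark.streaming.Time.
      case class Time(milliseconds: Long) {
        def >(that: Time): Boolean = milliseconds > that.milliseconds
      }

      private var lastAllocatedBatchTime: Time = null
      private val timeToAllocatedBlocks =
        scala.collection.mutable.HashMap.empty[Time, Seq[String]]

      def allocateBlocksToBatch(batchTime: Time, blocks: Seq[String]): Unit = synchronized {
        if (lastAllocatedBatchTime == null || batchTime > lastAllocatedBatchTime) {
          timeToAllocatedBlocks(batchTime) = blocks
          lastAllocatedBatchTime = batchTime
        } else {
          // batchTime <= lastAllocatedBatchTime, e.g. a WAL recovered after a
          // BatchAllocationEvent but before the matching BatchCleanupEvent, so the
          // batch may be handed out again. Per the suggestion above, ignore it.
          println(s"Ignoring allocation for possibly processed batch $batchTime")
        }
      }

      def main(args: Array[String]): Unit = {
        allocateBlocksToBatch(Time(1000), Seq("block-1"))
        allocateBlocksToBatch(Time(1000), Seq("block-1")) // ignored: same batch time
        allocateBlocksToBatch(Time(2000), Seq("block-2")) // allocated: strictly newer
      }
    }

    With this guard, recovery from a WAL that ends in a BatchAllocationEvent no
    longer needs the special-case branch shown in the diff above.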

