Github user pwendell commented on a diff in the pull request:

    https://github.com/apache/spark/pull/4149#discussion_r23347144
  
    --- Diff: 
streaming/src/main/scala/org/apache/spark/streaming/scheduler/ReceiverTracker.scala
 ---
    @@ -119,9 +118,17 @@ class ReceiverTracker(ssc: StreamingContext, 
skipReceiverLaunch: Boolean = false
         }
       }
     
    -    /** Clean up metadata older than the given threshold time */
    -  def cleanupOldMetadata(cleanupThreshTime: Time) {
    +  /** Clean up the data and metadata of old blocks and batches */
    --- End diff ---
    
    It would be good if this explained what happens to blocks assigned 
_exactly_ at the thresholdTime. The old doc seemed to imply that such blocks 
would be spared (i.e., a block must be strictly older than the threshold to be 
cleaned up); it would be good to preserve that clarification in the new doc as 
well. In fact, throughout the entire call chain for cleaning up old data, the 
docs should spell out what the boundary behavior is.
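
    To make the ambiguity concrete, here is a minimal sketch (hypothetical 
names, not the actual ReceiverTracker code) of the two possible boundary 
policies, using plain millisecond timestamps in place of Spark's `Time`:

    ```scala
    // Hypothetical sketch: should a block timed exactly at the threshold
    // be cleaned up, or spared? The two policies disagree only on that block.
    object CleanupBoundary {
      // "Strictly older" policy: the block at exactly `threshold` is spared.
      def strictlyOlder(blockTimes: Seq[Long], threshold: Long): Seq[Long] =
        blockTimes.filter(_ < threshold)

      // "Older or equal" policy: the block at exactly `threshold` is cleaned.
      def olderOrEqual(blockTimes: Seq[Long], threshold: Long): Seq[Long] =
        blockTimes.filter(_ <= threshold)

      def main(args: Array[String]): Unit = {
        val times = Seq(100L, 200L, 300L)
        // With threshold 200, only the 200 block's fate differs:
        assert(strictlyOlder(times, 200L) == Seq(100L))
        assert(olderOrEqual(times, 200L) == Seq(100L, 200L))
        println("policies differ only at the boundary")
      }
    }
    ```

    Whichever comparison the implementation actually uses, the scaladoc on 
each method in the chain should state it explicitly so callers don't have to 
read the code to find out.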

