Github user srowen commented on a diff in the pull request:

    https://github.com/apache/spark/pull/5886#discussion_r29927125
  
    --- Diff: core/src/main/scala/org/apache/spark/deploy/history/FsHistoryProvider.scala ---
    @@ -184,15 +183,14 @@ private[history] class FsHistoryProvider(conf: SparkConf, clock: Clock)
        */
       private[history] def checkForLogs(): Unit = {
         try {
    +      val newLastScanTime = getNewLastScanTime()
    --- End diff ---
    
    So, doesn't this have the opposite 'problem': in some cases a log file will
    get scanned twice even though it hasn't changed?
    
    If `lastScanTime` is 90, `newLastScanTime` is 100, and a file is modified
    (once) at 101, just after `newLastScanTime` is established, then it will be
    read twice. I'm just double-checking that this is fine if it only happens
    once in a while.
    
    Touching a file seems a little icky, but I understand the logic. I can't
    think of anything better that doesn't involve listing the dir again,
    processing files again frequently, or taking arbitrary guesses about how
    long the listing takes.
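
    If I'm reading the approach right, the "touch" amounts to asking the
    filesystem itself what time it thinks it is, rather than trusting the
    driver's local clock. A minimal sketch of that idea, assuming a Hadoop
    `FileSystem` handle; the probe path and helper name are hypothetical, and
    the real `getNewLastScanTime()` may differ in the details:

    ```scala
    import org.apache.hadoop.fs.{FileSystem, Path}

    // "Touch" a throwaway file in the log directory and read back its
    // modification time, i.e. the filesystem's own notion of "now".
    def fsCurrentTime(fs: FileSystem, logDir: String): Long = {
      val probe = new Path(logDir, ".current-time-probe")  // hypothetical name
      try {
        fs.create(probe).close()                     // create/touch the probe file
        fs.getFileStatus(probe).getModificationTime  // FS clock, not driver clock
      } finally {
        fs.delete(probe, false)                      // best-effort cleanup
      }
    }
    ```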
    
    This is worth it in the sense that the cost of missing an update in this
    very rare case is high? Like, it's a correctness issue, and you'd miss a bit
    of history forever?


