HeartSaVioR commented on a change in pull request #27208: [SPARK-30481][CORE] Integrate event log compactor into Spark History Server
URL: https://github.com/apache/spark/pull/27208#discussion_r367185883
 
 

 ##########
 File path: core/src/main/scala/org/apache/spark/deploy/history/FsHistoryProvider.scala
 ##########
 @@ -661,26 +691,33 @@ private[history] class FsHistoryProvider(conf: SparkConf, clock: Clock)
       reader: EventLogFileReader,
       scanTime: Long,
       enableOptimizations: Boolean): Unit = {
+    val rootPath = reader.rootPath
     try {
+      val (shouldReload, lastCompactionIndex) = compact(reader)
 
 Review comment:
   I totally agree it would be ideal to do so, but it makes the problem quite complicated, because compaction "modifies" the event log files while "app listing" and "app rebuild" are reading them, or are holding them as their list of files.
   
   I actually had to deal with a similar issue in `loadDiskStore` and `createInMemoryStore` (there is a retry mechanism there), but I'm not sure it wouldn't get even more complicated if we also took app listing into account.
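   To illustrate what I mean by a retry mechanism, here is a minimal sketch, not the actual `loadDiskStore` / `createInMemoryStore` code; `loadStoreWithRetry` and `EventLogFilesChangedException` are made-up names for this example. If the listed event log files are rewritten by compaction while the store is being rebuilt, the load is retried a bounded number of times against a freshly listed set of files.
   
   ```scala
   object RetrySketch {
     // Hypothetical signal that the listed event log files changed underneath us
     // (e.g. compaction replaced some of them) while the store was being rebuilt.
     final class EventLogFilesChangedException(msg: String) extends Exception(msg)
   
     // Bounded retry: rebuild the store; if the file list turned stale mid-read,
     // drop the partial result and rebuild from a freshly listed set of files.
     def loadStoreWithRetry[T](maxAttempts: Int)(rebuildStore: () => T): T = {
       try {
         rebuildStore()
       } catch {
         case _: EventLogFilesChangedException if maxAttempts > 1 =>
           loadStoreWithRetry(maxAttempts - 1)(rebuildStore)
       }
     }
   }
   ```
   
   Extending the same kind of guard to app listing is where I expect the extra complexity to come from.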
