vanzin commented on a change in pull request #25797: [SPARK-29043][Core] Improve the concurrent performance of History Server
URL: https://github.com/apache/spark/pull/25797#discussion_r347690131
 
 

 ##########
 File path: core/src/main/scala/org/apache/spark/deploy/history/FsHistoryProvider.scala
 ##########
 @@ -797,6 +819,43 @@ private[history] class FsHistoryProvider(conf: SparkConf, clock: Clock)
     }
   }
 
+  /**
 +   * Check the specified event log and delete it if it is older than the max log age defined by the user.
+   */
+  private def checkAndCleanLog(logPath: String): Unit = Utils.tryLog {
+    val maxTime = clock.getTimeMillis() - conf.get(MAX_LOG_AGE_S) * 1000
+    val expiredLog = Try {
+      val log = listing.read(classOf[LogInfo], logPath)
+      if (log.lastProcessed < maxTime) Some(log) else None
+    } match {
+      case Success(log) => log
+      case Failure(_: NoSuchElementException) => None
+      case Failure(e) => throw e
+    }
+
+    expiredLog.foreach { log =>
+      log.appId.foreach { appId =>
+        listing.view(classOf[ApplicationInfoWrapper])
 
 Review comment:
   I'm not following. Why do you need to iterate and filter by appId instead of 
just fetching the entry directly? The appId is the key.
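  For reference, a direct lookup along the lines suggested might look roughly like the sketch below. This is only an illustration of the reviewer's point, assuming the KVStore `read(klass, naturalKey)` call that FsHistoryProvider already uses elsewhere (e.g. in `load()`); it is not the PR author's actual change.

  ```scala
  // Sketch only: point lookup by appId instead of iterating and filtering a view.
  expiredLog.foreach { log =>
    log.appId.foreach { appId =>
      val app =
        try {
          // appId is the natural key of ApplicationInfoWrapper, so it can be
          // read directly from the KVStore rather than scanning all entries.
          Some(listing.read(classOf[ApplicationInfoWrapper], appId))
        } catch {
          case _: NoSuchElementException => None
        }
      // ...continue the cleanup using `app`, if it exists...
    }
  }
  ```

  A point read like this avoids the full scan over all applications that a filtered view implies.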
