shuyouZZ commented on code in PR #38983:
URL: https://github.com/apache/spark/pull/38983#discussion_r1049831087
##########
core/src/main/scala/org/apache/spark/deploy/history/FsHistoryProvider.scala:
##########
@@ -535,11 +535,17 @@ private[history] class FsHistoryProvider(conf: SparkConf, clock: Clock)
           }
         } catch {
           case _: NoSuchElementException =>
-            // If the file is currently not being tracked by the SHS, add an entry for it and try
-            // to parse it. This will allow the cleaner code to detect the file as stale later on
-            // if it was not possible to parse it.
+            // If the file is currently not being tracked by the SHS, check whether the log file
+            // has expired. If expired, delete it from the log dir; if not, add an entry for it
+            // and try to parse it. This will allow the cleaner code to detect the file as stale
+            // later on if it was not possible to parse it.
             try {
-              if (count < conf.get(UPDATE_BATCHSIZE)) {
+              if (conf.get(CLEANER_ENABLED) && reader.modificationTime <
+                  clock.getTimeMillis() - conf.get(MAX_LOG_AGE_S) * 1000) {
+                logInfo(s"Deleting expired event log ${reader.rootPath.toString}")
+                deleteLog(fs, reader.rootPath)
+                false
+              } else if (count < conf.get(UPDATE_BATCHSIZE)) {
Review Comment:
> When the exception is thrown, we should also cleanup state from listing db (if the first read had succeeded) - the exception could be thrown for either of the two reads.

Do you mean this? `val appInfo = listing.read(classOf[ApplicationInfoWrapper], info.appId.get)` may also throw `NoSuchElementException`, and in that case we also need to clean up the log info from the listing db.
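
For reference, a minimal sketch of the cleanup being discussed, assuming the `listing` KVStore exposes `read`/`delete` by natural key as it does elsewhere in `FsHistoryProvider` (this is an illustrative sketch of the idea, not the PR's actual change; `reader`, `listing`, and the surrounding members are assumed from the enclosing class):

```scala
val logPath = reader.rootPath.toString
try {
  val info = listing.read(classOf[LogInfo], logPath)
  // This second read can also throw NoSuchElementException if the app
  // entry was never written or was already removed.
  info.appId.foreach { appId =>
    listing.read(classOf[ApplicationInfoWrapper], appId)
  }
} catch {
  case _: NoSuchElementException =>
    // If the first read succeeded but the second failed, a stale LogInfo
    // entry is left behind; remove it before re-adding the file so the
    // cleaner never sees an orphaned entry.
    try {
      listing.delete(classOf[LogInfo], logPath)
    } catch {
      case _: NoSuchElementException => // nothing was written; ignore
    }
    // ...then continue with the expiration check / re-parse logic
    // shown in the diff above.
}
```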