Github user squito commented on a diff in the pull request:
https://github.com/apache/spark/pull/20138#discussion_r161098356
--- Diff: core/src/main/scala/org/apache/spark/deploy/history/FsHistoryProvider.scala ---
@@ -544,73 +621,75 @@ private[history] class FsHistoryProvider(conf: SparkConf, clock: Clock)
bus.addListener(listener)
replay(fileStatus, bus, eventsFilter = eventsFilter)
-    listener.applicationInfo.foreach { app =>
-      // Invalidate the existing UI for the reloaded app attempt, if any. See LoadedAppUI for a
-      // discussion on the UI lifecycle.
-      synchronized {
-        activeUIs.get((app.info.id, app.attempts.head.info.attemptId)).foreach { ui =>
-          ui.invalidate()
-          ui.ui.store.close()
+    val (appId, attemptId) = listener.applicationInfo match {
+      case Some(app) =>
+        // Invalidate the existing UI for the reloaded app attempt, if any. See LoadedAppUI for a
+        // discussion on the UI lifecycle.
+        synchronized {
+          activeUIs.get((app.info.id, app.attempts.head.info.attemptId)).foreach { ui =>
+            ui.invalidate()
+            ui.ui.store.close()
+          }
         }
-      }
-      addListing(app)
+        addListing(app)
+        (Some(app.info.id), app.attempts.head.info.attemptId)
+
+      case _ =>
+        (None, None)
--- End diff ---
I think a comment here would help, explaining that writing an entry with no appId will
mark this log file as eligible for automatic recovery if it's still in that state after
max_log_age. (If I understood correctly.)
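To spell out the behavior I mean, here is a rough, simplified sketch (the `LogInfo` case class, field names, and `isEligibleForCleanup` helper below are all hypothetical, not the actual FsHistoryProvider code): an entry whose appId is `None` never attached to an application, so once it is older than the configured max log age it can be cleaned up.

```scala
// Hypothetical, simplified model of a listing entry; not the real
// FsHistoryProvider types. An entry with appId = None represents a log
// that could not be associated with an application.
case class LogInfo(logPath: String, appId: Option[String], lastProcessed: Long)

// Sketch of the cleanup-eligibility rule being described: an entry with no
// appId becomes eligible for deletion once it exceeds the max log age.
def isEligibleForCleanup(info: LogInfo, now: Long, maxLogAgeMs: Long): Boolean =
  info.appId.isEmpty && (now - info.lastProcessed) > maxLogAgeMs

// Example: a stale entry with no appId is eligible; one with an appId is not.
val stale = LogInfo("/logs/app.inprogress", None, lastProcessed = 0L)
println(isEligibleForCleanup(stale, now = 100L, maxLogAgeMs = 50L)) // true
```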
---