Github user vanzin commented on a diff in the pull request:
https://github.com/apache/spark/pull/2471#discussion_r18935441
--- Diff: core/src/main/scala/org/apache/spark/deploy/history/FsHistoryProvider.scala ---
@@ -46,8 +58,9 @@ private[history] class FsHistoryProvider(conf: SparkConf) extends ApplicationHis
private val fs = Utils.getHadoopFileSystem(resolvedLogDir,
SparkHadoopUtil.get.newConfiguration(conf))
- // A timestamp of when the disk was last accessed to check for log updates
- private var lastLogCheckTimeMs = -1L
+ // The scheduled thread pool size must be one, otherwise there will be concurrency
+ // issues on fs and applications between the check task and the clean task.
+ private val pool = Executors.newScheduledThreadPool(1)
--- End diff ---
Ah, another thing: you should override `stop()` and shut down this executor
cleanly (it's mostly a "best effort" thing, but still).
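For example, a minimal sketch of such an override (assuming the executor field is named `pool` as in the diff and `java.util.concurrent.TimeUnit` is imported; the timeout value is illustrative, not from the PR):

```scala
// Best-effort cleanup: stop accepting new tasks, then give any in-flight
// check/clean task a short window to finish before forcing shutdown.
override def stop(): Unit = {
  pool.shutdown()
  if (!pool.awaitTermination(5, TimeUnit.SECONDS)) {
    pool.shutdownNow()
  }
}
```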