LuciferYang commented on code in PR #39226:
URL: https://github.com/apache/spark/pull/39226#discussion_r1058030818
##########
core/src/main/scala/org/apache/spark/status/AppStatusStore.scala:
##########
@@ -733,6 +734,15 @@ private[spark] class AppStatusStore(
   def close(): Unit = {
     store.close()
+    cleanUpStorePath()
+  }
+
+  private def cleanUpStorePath(): Unit = {
+    storePath.foreach { p =>
+      if (p.exists()) {
+        p.listFiles().foreach(Utils.deleteRecursively)
+      }
+    }
Review Comment:
Yes, abnormal termination will leave dangling files behind, and users need to
clean them up manually in that scenario.
There's another problem I need to mention:
- Start spark-shell twice with `spark.ui.store.path` pointing to the same
directory:
  - The first app starts successfully and uses `spark.ui.store.path` to store
Live UI data.
  - The second app also starts successfully, but it falls back to
`InMemoryStore` for Live UI data due to `org.rocksdb.RocksDBException: While
lock file: /${baseDir}/listing.rdb/LOCK: Resource temporarily unavailable`
(see the standalone sketch below).
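
For context, here is a minimal standalone sketch of the lock conflict outside
Spark, using the RocksDB Java API directly. The path and object name are mine
for illustration; only the exception message matches what Spark hits:

```scala
import java.io.File
import org.rocksdb.{Options, RocksDB, RocksDBException}

object RocksDbLockRepro {
  def main(args: Array[String]): Unit = {
    RocksDB.loadLibrary()
    val path = "/tmp/ui-store/listing.rdb" // stands in for ${baseDir}/listing.rdb
    new File(path).getParentFile.mkdirs()
    val options = new Options().setCreateIfMissing(true)
    val first = RocksDB.open(options, path) // "first app": acquires the LOCK file
    try {
      RocksDB.open(options, path)           // "second app": same directory
    } catch {
      case e: RocksDBException =>
        // Prints something like:
        // "While lock file: /tmp/ui-store/listing.rdb/LOCK: Resource temporarily unavailable"
        println(e.getMessage)
    } finally {
      first.close()
    }
  }
}
```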
Should we solve this problem, for example by creating a subdirectory under
`spark.ui.store.path` for each app, so that the RocksDB instances stay
isolated? (However, if we do that, do we still need to add the `cleanup`
config? The app would use a different directory after each restart.)
@gengliangwang @mridulm
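
For illustration, a rough sketch of the per-app subdirectory idea;
`resolveStoreDir` is a hypothetical helper of my own, not something this PR
implements:

```scala
import java.io.{File, IOException}

// Sketch only: derive a per-application store directory under the configured
// spark.ui.store.path so concurrent apps never open the same RocksDB instance.
object StoreDirSketch {
  def resolveStoreDir(baseDir: File, appId: String): File = {
    val dir = new File(baseDir, appId)
    if (!dir.isDirectory && !dir.mkdirs()) {
      throw new IOException(s"Cannot create store directory ${dir.getAbsolutePath}")
    }
    dir
  }
}
```

With that layout, `close()` would only need to delete its own subdirectory,
and a restarted app would get a fresh directory anyway, which is why the
`cleanup` config question above comes up.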
--
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
To unsubscribe, e-mail: [email protected]
For queries about this service, please contact Infrastructure at:
[email protected]