GitHub user JoshRosen commented on a diff in the pull request:
https://github.com/apache/spark/pull/21390#discussion_r189813626
--- Diff: core/src/main/scala/org/apache/spark/deploy/worker/Worker.scala
---
@@ -97,6 +97,10 @@ private[deploy] class Worker(
private val APP_DATA_RETENTION_SECONDS =
conf.getLong("spark.worker.cleanup.appDataTtl", 7 * 24 * 3600)
+ // Whether or not to clean up the non-shuffle files when an executor finishes.
+ private val CLEANUP_NON_SHUFFLE_FILES_ENABLED =
+ conf.getBoolean("spark.worker.cleanup.nonShuffleFiles.enabled", true)
--- End diff ---
Is there potential for confusion from the fact that
`spark.worker.cleanup.nonShuffleFiles.enabled`'s effects are not controlled by
`spark.worker.cleanup.enabled`? Should they be? The
`spark.worker.cleanup.enabled` configuration only seems to deal with cleaning
up completed applications' directories, whereas this deals with cleanup for
completed executors whose application is still running (likely with new
executors).
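
For illustration only, here is a minimal sketch (not what the PR currently does)
of what it could look like if the new flag were gated on the existing
`spark.worker.cleanup.enabled` setting inside `Worker.scala`:

```scala
// Hypothetical sketch: have non-shuffle-file cleanup also respect
// spark.worker.cleanup.enabled (which defaults to false), so the new
// behavior is only active when worker cleanup is enabled at all.
private val CLEANUP_ENABLED =
  conf.getBoolean("spark.worker.cleanup.enabled", false)
private val CLEANUP_NON_SHUFFLE_FILES_ENABLED =
  CLEANUP_ENABLED &&
    conf.getBoolean("spark.worker.cleanup.nonShuffleFiles.enabled", true)
```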
---