I am using Spark 1.1 with the Ooyala spark-jobserver (which essentially creates long-running Spark contexts that jobs are executed in). These contexts hold RDDs cached in memory via RDD.persist().
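Roughly the kind of caching I mean (a minimal sketch; the dataset name and path are made-up examples):

    import org.apache.spark.{SparkConf, SparkContext}
    import org.apache.spark.storage.StorageLevel

    val sc = new SparkContext(new SparkConf().setAppName("long-running-context"))

    // Hypothetical dataset; the path is an example, not my real one.
    val events = sc.textFile("hdfs:///data/events")

    // Cache in memory so later jobs submitted to the same context can reuse it.
    events.persist(StorageLevel.MEMORY_ONLY)

    // The first action materializes the cached partitions.
    println(events.count())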
I want to enable the worker cleaner so it cleans up the /spark/work directories that are created for each app, but does not touch the cached RDDs. My settings are:

    spark.worker.cleanup.enabled = true
    spark.worker.cleanup.interval = 1800
    spark.worker.cleanup.appDataTtl = 604800  # 7 days

Two questions here:

1. Will these settings affect cleanup of cached RDDs? (I want those to stay persisted indefinitely.)
2. Is there a way to force the cleaner to run, and how can I see when the cleaner has run?

After setting these options, I still see data for apps older than 7 days on the worker nodes. Why is that happening?
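For reference, since these are settings for the standalone Worker daemon rather than for the application, my understanding is they have to be passed to each worker via SPARK_WORKER_OPTS in conf/spark-env.sh (and the workers restarted), along these lines:

    # conf/spark-env.sh on each worker node; restart the workers afterwards.
    # Interval and TTL are in seconds: sweep every 30 minutes, keep app data 7 days.
    export SPARK_WORKER_OPTS="-Dspark.worker.cleanup.enabled=true \
      -Dspark.worker.cleanup.interval=1800 \
      -Dspark.worker.cleanup.appDataTtl=604800"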