I think the setting you are missing is 'spark.worker.cleanup.appDataTtl'.
It controls how old an application's files must be before the cleanup pass
will delete them; the default is 7 days (7 * 24 * 3600 seconds). More info
here: https://spark.apache.org/docs/1.0.1/spark-standalone.html.
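As a rough sketch: these are worker-side properties in standalone mode, so
they go into SPARK_WORKER_OPTS in conf/spark-env.sh on each worker rather
than into the application's SparkConf (the values below are illustrative,
not recommendations):

  # conf/spark-env.sh on each worker (standalone mode only).
  # spark.worker.cleanup.enabled defaults to false, so enable it explicitly;
  # appDataTtl is in seconds (604800 = 7 days, which is also the default).
  export SPARK_WORKER_OPTS="-Dspark.worker.cleanup.enabled=true \
    -Dspark.worker.cleanup.appDataTtl=604800"

Workers only read these properties at startup, so they need to be restarted
after the change.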

Also, the 'spark.worker.cleanup.interval' you have configured is pretty
aggressive at 10 seconds. Looking at the code, I would be willing to bet
you would be kicking off these cleanup sweeps in close proximity to one
another, and if your system was under load you could end up thrashing your
CPU. You may want to use something a little more reasonable, like 30
minutes or an hour.
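For example, something like this would back the sweep off to 30 minutes
(the interval is in seconds, and 1800 is in fact the default, so simply
omitting the property gives the same behavior):

  # Sweep for stale app directories every 30 minutes instead of every 10 seconds.
  export SPARK_WORKER_OPTS="$SPARK_WORKER_OPTS -Dspark.worker.cleanup.interval=1800"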


