We have a Spark Structured Streaming job that runs out of disk quota after
a few days.

The primary reason is that a bunch of empty folders are getting created in
the /work/tmp directory and never cleaned up.

Any idea how to prune them?
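As a stopgap we could run a small cleanup job like the sketch below, which
removes empty directories under /work/tmp that haven't been modified for a
day (the path and the age cutoff are just placeholders for our setup), but
we'd prefer a Spark-side setting if one exists:

    import java.nio.file.{Files, Path, Paths}
    import java.time.Instant
    import java.time.temporal.ChronoUnit
    import scala.jdk.CollectionConverters._  // JavaConverters on Scala 2.12

    object PruneEmptyDirs {
      def main(args: Array[String]): Unit = {
        val root: Path = Paths.get("/work/tmp")               // placeholder path
        val cutoff = Instant.now().minus(1, ChronoUnit.DAYS)  // placeholder age

        // Collect sub-directories, deepest first, so parents that become
        // empty once their children are removed can go in the same pass.
        val dirs = Files.walk(root).iterator().asScala.toList
          .filter(p => Files.isDirectory(p) && p != root)
          .sortBy(p => -p.getNameCount)

        dirs.foreach { dir =>
          val listing = Files.list(dir)
          val isEmpty = try !listing.findAny().isPresent() finally listing.close()
          val isOld = Files.getLastModifiedTime(dir).toInstant.isBefore(cutoff)
          // deleteIfExists refuses non-empty directories, so only stale,
          // empty ones are actually removed here.
          if (isEmpty && isOld) Files.deleteIfExists(dir)
        }
      }
    }

We'd still like to understand why the job leaves these empty directories
behind in the first place.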


