Hi, I am using Mesos 0.22.1 with Spark 1.3.1-hadoop2.4.
I submit nine Spark jobs sequentially every hour with spark-submit, and Mesos runs each one as its own framework.
The challenge is that my own application jar containing the Spark jobs is 160 MB, the spark-1.3.1-bin-hadoop2.4.tgz executor archive is 241 MB, and the unpacked Spark distribution is another 200 MB.
Thus every hour the sandboxes of terminated frameworks accumulate roughly 9 * (160 + 240 + 200) MB ≈ 5.4 GB!
I am running out of disk space every night, so I am trying to garbage collect those large jar and tgz files while KEEPING the log files (stderr, stdout).
Is it possible to selectively garbage collect files stored in the sandboxes of terminated frameworks?
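To make the "selectively" part concrete, here is a rough sketch (in Python) of the kind of cleanup I could hand-roll from cron as a workaround, walking the agent work_dir and deleting only the large fetched artifacts from old sandboxes. The work_dir path, the age threshold, and the file patterns are my own assumptions, not anything Mesos provides, so treat it only as an illustration of what I am asking for:

```python
import os
import time

MESOS_WORK_DIR = "/tmp/mesos"       # assumed agent --work_dir; adjust to yours
MIN_AGE_SECONDS = 2 * 3600          # only touch files untouched for 2+ hours
KEEP = {"stdout", "stderr"}         # sandbox logs to preserve
LARGE_SUFFIXES = (".jar", ".tgz", ".tar.gz")  # fetched artifacts worth reclaiming

def clean_sandboxes(root):
    """Delete large fetched artifacts from old executor sandboxes, keep logs."""
    now = time.time()
    reclaimed = 0
    for dirpath, _dirnames, filenames in os.walk(root):
        # Only look inside executor directories (the sandboxes).
        if os.sep + "executors" + os.sep not in dirpath:
            continue
        for name in filenames:
            if name in KEEP or not name.endswith(LARGE_SUFFIXES):
                continue
            path = os.path.join(dirpath, name)
            # Age check as a crude proxy for "the framework has terminated".
            if now - os.path.getmtime(path) < MIN_AGE_SECONDS:
                continue
            reclaimed += os.path.getsize(path)
            os.remove(path)
    return reclaimed

if __name__ == "__main__":
    freed_mb = clean_sandboxes(MESOS_WORK_DIR) / (1024.0 * 1024.0)
    print("reclaimed %.1f MB" % freed_mb)
```

The unpacked spark-1.3.1-bin-hadoop2.4 directory would need a similar pass with shutil.rmtree. Ideally, though, Mesos itself would let me exclude stdout/stderr from sandbox garbage collection instead of me hand-rolling something like the above.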
thx reinis

