[
https://issues.apache.org/jira/browse/SPARK-12430?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15095075#comment-15095075
]
Fede Bar commented on SPARK-12430:
----------------------------------
Hi Jean-Baptiste,
Thanks for following up. I did not use MESOS_DIRECTORY nor
spark.shuffle.service.enabled. Please find below the config parameters that
were passed in the tests (spark.mesos.role was not passed in 1.4.1, as it is
not supported there):
########## spark-env.sh ##########
export SPARK_SCALA_VERSION   # Scala version used to pick the matching Spark jars
export PYSPARK_PYTHON        # Python binary used by PySpark
export MESOS_NATIVE_LIBRARY  # path to the libmesos native library
export SPARK_EXECUTOR_URI    # URI of the Spark tarball fetched by Mesos executors
export MASTER                # default master URL (mesos://...)
export SPARK_LOG_DIR         # directory for Spark logs
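For context, on a Mesos setup these typically take the following shape (the
values below are illustrative placeholders, not our actual settings):

export SPARK_SCALA_VERSION=2.10                              # placeholder
export PYSPARK_PYTHON=/usr/bin/python2.7                     # placeholder
export MESOS_NATIVE_LIBRARY=/usr/local/lib/libmesos.so       # placeholder
export SPARK_EXECUTOR_URI=hdfs:///spark/spark-1.5.2-bin.tgz  # placeholder
export MASTER=mesos://zk://zk1:2181,zk2:2181/mesos           # placeholder
export SPARK_LOG_DIR=/var/log/spark                          # placeholder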
########## spark-defaults.conf ##########
spark.master
spark.executor.uri
spark.eventLog.enabled
spark.serializer
spark.eventLog.dir
spark.eventLog.compress
spark.mesos.coarse
spark.streaming.blockInterval
spark.driver.memory
spark.executor.memory
spark.executor.extraJavaOptions
########## spark-submit to run the job ##########
spark.mesos.role
spark.cores.max
spark.driver.memory
spark.driver.maxResultSize
spark.executor.memory
spark.locality.wait.process
spark.locality.wait.node
spark.scheduler.allocation.file
spark.scheduler.mode
spark.ui.port
spark.akka.frameSize
spark.storage.memoryFraction
spark.executor.extraJavaOptions
--driver-java-options "Various GC settings here"
/path/to/jar/file.jar
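For completeness, the submission looked roughly like the sketch below. All
values are illustrative placeholders, not our actual settings; the real
command passes the keys listed above the same way, via --conf:

########## illustrative spark-submit sketch ##########
spark-submit \
  --master mesos://zk://zk1:2181,zk2:2181/mesos \
  --conf spark.mesos.role=spark \
  --conf spark.cores.max=48 \
  --conf spark.driver.memory=8g \
  --conf spark.executor.memory=16g \
  --conf spark.scheduler.mode=FAIR \
  --driver-java-options "-XX:+UseG1GC" \
  /path/to/jar/file.jar

For reference, the external shuffle service mentioned above would have been
enabled with spark.shuffle.service.enabled=true (plus a shuffle service
running on each agent); again, we did not set it.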
> Temporary folders do not get deleted after a task completes, causing
> problems with disk space.
> -------------------------------------------------------------------------------------------
>
> Key: SPARK-12430
> URL: https://issues.apache.org/jira/browse/SPARK-12430
> Project: Spark
> Issue Type: Bug
> Components: Spark Core
> Affects Versions: 1.5.1, 1.5.2
> Environment: Ubuntu server
> Reporter: Fede Bar
>
> We are experiencing an issue with automatic /tmp folder deletion after a
> framework completes. Completing an M/R job with Spark 1.5.2 (same behavior
> as Spark 1.5.1) over Mesos does not delete some temporary folders, which
> eventually exhausts the free disk space on the server.
> Behavior of an M/R job using Spark 1.4.1 over the Mesos cluster:
> - Launched using spark-submit on one cluster node.
> - The following folders are created: /tmp/mesos/slaves/id# ,
> /tmp/spark-#/ , /tmp/spark-#/blockmgr-#
> - When the task completes, /tmp/spark-#/ gets deleted along with its
> /tmp/spark-#/blockmgr-# sub-folder.
> Behavior of the same M/R job using Spark 1.5.2 over the Mesos cluster:
> - Launched using spark-submit on one cluster node.
> - The following folders are created: /tmp/mesos/mesos/slaves/id** ,
> /tmp/spark-***/ , /tmp/blockmgr-***
> - When the task completes, /tmp/spark-***/ gets deleted but NOT the shuffle
> container folder /tmp/blockmgr-***
> Unfortunately, /tmp/blockmgr-*** can account for several GB depending on
> the job that ran. Over time this fills the disk, with the consequences we
> all know.
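> One mitigation that may at least contain the damage (a sketch based on the
> standard spark.local.dir setting, which controls where these scratch
> folders are created; the path below is a placeholder) is to move scratch
> space off the root disk:
>
> ########## possible containment (illustrative) ##########
> # in spark-defaults.conf; /data/spark-scratch is a placeholder path
> spark.local.dir    /data/spark-scratch
>
> This does not fix the leak, it only keeps /tmp from filling up.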
> Running a cleanup shell script would probably work, but it is difficult to
> tell folders in use by a running M/R job apart from stale ones; one
> possible approach is sketched below. I did notice similar issues opened by
> other users and marked as "resolved", but none seems to exactly match the
> behavior above.
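> A stopgap sketch (untested; assumes lsof is installed and that the
> blockmgr-* folders live directly under /tmp, as observed above):
>
> ########## cleanup sketch (illustrative, not a fix) ##########
> #!/bin/sh
> # Remove /tmp/blockmgr-* dirs idle for over an hour that no process holds open.
> for d in /tmp/blockmgr-*; do
>   [ -d "$d" ] || continue
>   # skip directories whose own mtime is within the last 60 minutes
>   find "$d" -maxdepth 0 -mmin +60 | grep -q . || continue
>   # lsof +D exits non-zero when nothing is open under $d
>   if ! lsof +D "$d" > /dev/null 2>&1; then
>     rm -rf "$d"
>   fi
> done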
> I really hope someone has insights on how to fix it.
> Thank you very much!