[
https://issues.apache.org/jira/browse/SPARK-12430?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15092445#comment-15092445
]
Jean-Baptiste Onofré commented on SPARK-12430:
----------------------------------------------
I think it's related to this commit:
{code}
52f5754 Marcelo Vanzin on 1/21/15 at 11:38 PM (committed by Josh Rosen on
2/2/15 at 11:01 PM)
Make sure only owner can read / write to directories created for the job.
Whenever a directory is created by the utility method, immediately restrict
its permissions so that only the owner has access to its contents.
Signed-off-by: Josh Rosen <[email protected]>
{code}
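The effect of that commit can be illustrated with a minimal shell sketch (the directory name below is illustrative, not Spark's actual naming; Spark's real implementation lives in Scala utility code):

```shell
# Minimal sketch of "only owner can read / write" on a job directory.
# The path is hypothetical, not Spark's actual naming scheme.
dir="$(mktemp -d /tmp/spark-demo-XXXXXX)"
chmod 700 "$dir"      # rwx------ : strip all group/other access
stat -c '%a' "$dir"   # prints 700 on Linux (use stat -f '%Lp' on macOS)
rm -rf "$dir"
```

A directory restricted this way is readable and writable only by the job owner, which is what the commit message describes.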
It can be checked with the extras/java8-tests; I will verify.
Sorry for the delay, I'll keep you posted.
> Temporary folders do not get deleted after Task completes causing problems
> with disk space.
> -------------------------------------------------------------------------------------------
>
> Key: SPARK-12430
> URL: https://issues.apache.org/jira/browse/SPARK-12430
> Project: Spark
> Issue Type: Bug
> Components: Spark Core
> Affects Versions: 1.5.1, 1.5.2
> Environment: Ubuntu server
> Reporter: Fede Bar
>
> We are experiencing an issue with automatic /tmp folder deletion after the
> framework completes. Completing a M/R job using Spark 1.5.2 (same behavior as
> Spark 1.5.1) over Mesos will not delete some temporary folders, causing free
> disk space on the server to be exhausted.
> Behavior of M/R job using Spark 1.4.1 over Mesos cluster:
> - Launched using spark-submit on one cluster node.
> - Following folders are created: */tmp/mesos/slaves/id#* , */tmp/spark-#/* ,
> */tmp/spark-#/blockmgr-#*
> - When task is completed */tmp/spark-#/* gets deleted along with
> */tmp/spark-#/blockmgr-#* sub-folder.
> Behavior of M/R job using Spark 1.5.2 over Mesos cluster (same identical job):
> - Launched using spark-submit on one cluster node.
> - Following folders are created: */tmp/mesos/mesos/slaves/id** * ,
> */tmp/spark-***/ * , {color:red}/tmp/blockmgr-***{color}
> - When the task is completed, */tmp/spark-***/ * gets deleted but NOT the shuffle
> container folder {color:red}/tmp/blockmgr-***{color}.
> Unfortunately, {color:red}/tmp/blockmgr-***{color} can account for several
> GB depending on the job that ran. Over time this causes disk space to become
> full, with consequences that we all know.
> Running a shell script would probably work, but it is difficult to distinguish
> folders in use by a running M/R job from stale ones. I did notice similar issues
> opened by other users marked as "resolved", but none seems to exactly match
> the behavior above.
> I really hope someone has insights on how to fix it.
> Thank you very much!
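Until a fix lands, a workaround along the lines the reporter mentions might look like this sketch. It is an assumption-laden illustration, not a vetted script: it relies on lsof being installed and treats any blockmgr-* directory untouched for more than a day as stale.

```shell
#!/bin/sh
# Sketch: remove stale blockmgr-* directories left behind in /tmp.
# Assumptions: lsof is available; anything untouched for >1 day is stale.
for d in /tmp/blockmgr-*; do
  [ -d "$d" ] || continue                   # skip if the glob matched nothing
  lsof +D "$d" >/dev/null 2>&1 && continue  # skip folders still held open by a process
  # delete the directory itself if its mtime is older than one day
  find "$d" -maxdepth 0 -mtime +1 -exec rm -rf {} \;
done
```

The lsof check is a best-effort guard against deleting folders a running job still holds open; a conservative deployment would log candidates first rather than delete them outright.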
--
This message was sent by Atlassian JIRA
(v6.3.4#6332)