[ https://issues.apache.org/jira/browse/SPARK-7941?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14564376#comment-14564376 ]
Cory Nguyen commented on SPARK-7941:
------------------------------------

The location is in /mnt or /mnt1 => /mnt/var/lib/hadoop/tmp/nm-local-dir/usercache/hadoop/appcache

> Cache Cleanup Failure when job is killed by Spark
> --------------------------------------------------
>
>                 Key: SPARK-7941
>                 URL: https://issues.apache.org/jira/browse/SPARK-7941
>             Project: Spark
>          Issue Type: Bug
>          Components: PySpark, YARN
>    Affects Versions: 1.3.1
>            Reporter: Cory Nguyen
>
> Problem/Bug:
> If a job is running and Spark kills the job intentionally, the cache files remain on the local/worker nodes and are not cleaned up properly. Over time the old cache builds up and causes a "No Space Left on Device" error.
> The cache is cleaned up properly when the job succeeds. I have not verified whether the cache remains when the user intentionally kills the job.

--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
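Until the cleanup bug is fixed, operators hit by the "No Space Left on Device" error typically inspect and prune the stale appcache entries by hand. The sketch below is a minimal, hypothetical helper, not part of Spark or YARN: the directory path is the one quoted in the comment above, and the `prune_appcache` name and the 2-day retention window are assumptions you should adapt to your cluster (and verify no live containers still reference a directory before deleting it).

```shell
#!/bin/sh
# Hypothetical helper: report per-application cache usage under a
# NodeManager usercache directory and delete entries untouched for
# more than N days. Path and retention are examples, not defaults.

prune_appcache() {
    dir="$1"          # e.g. /mnt/var/lib/hadoop/tmp/nm-local-dir/usercache/hadoop/appcache
    days="${2:-2}"    # assumed retention window; tune per cluster

    # Show per-application cache usage, largest first.
    du -sh "$dir"/* 2>/dev/null | sort -rh

    # Remove top-level cache directories not modified in $days days.
    # Caution: confirm the corresponding applications are dead first.
    find "$dir" -mindepth 1 -maxdepth 1 -type d -mtime +"$days" \
        -exec rm -rf {} +
}

# Example invocation (path from the comment above):
# prune_appcache /mnt/var/lib/hadoop/tmp/nm-local-dir/usercache/hadoop/appcache 2
```

Running this from cron on each worker node is a stopgap only; once the cleanup fix lands, YARN's own deletion of appcache entries should make it unnecessary.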