[ https://issues.apache.org/jira/browse/SPARK-7941?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15556775#comment-15556775 ]
holdenk commented on SPARK-7941:
--------------------------------

Are you still experiencing this issue [~cqnguyen], or would it be ok for us to close this?

> Cache Cleanup Failure when job is killed by Spark
> --------------------------------------------------
>
>                 Key: SPARK-7941
>                 URL: https://issues.apache.org/jira/browse/SPARK-7941
>             Project: Spark
>          Issue Type: Bug
>          Components: PySpark, YARN
>    Affects Versions: 1.3.1
>            Reporter: Cory Nguyen
>         Attachments: screenshot-1.png
>
> Problem/Bug:
> If a job is running and Spark kills the job intentionally, the cache files
> remain on the local/worker nodes and are not cleaned up properly. Over time
> the old cache builds up and causes a "No Space Left on Device" error.
> The cache is cleaned up properly when the job succeeds. I have not verified
> whether the cache remains when the user intentionally kills the job.

--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

---------------------------------------------------------------------
To unsubscribe, e-mail: issues-unsubscr...@spark.apache.org
For additional commands, e-mail: issues-h...@spark.apache.org
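Until the leak itself is fixed, disk pressure from stale cache directories can be relieved out of band. The sketch below is an illustrative workaround, not part of Spark: it assumes the usual `spark-*`/`blockmgr-*` naming of per-application scratch directories under a worker's local dir, and the age threshold is a made-up safety margin so that directories a live executor may still be using are left alone.

```python
import shutil
import time
from pathlib import Path
from typing import List, Optional

def clean_stale_cache(local_dir: str, max_age_seconds: float,
                      now: Optional[float] = None) -> List[str]:
    """Remove spark-*/blockmgr-* directories older than max_age_seconds.

    Returns the paths removed. `now` is injectable to make testing easy.
    Only directories matching Spark's scratch-dir naming are touched,
    and only when their mtime is older than the cutoff, so recently
    active applications are not disturbed.
    """
    now = time.time() if now is None else now
    removed = []
    for entry in Path(local_dir).iterdir():
        if not entry.is_dir():
            continue
        if not (entry.name.startswith("spark-")
                or entry.name.startswith("blockmgr-")):
            continue
        if now - entry.stat().st_mtime > max_age_seconds:
            shutil.rmtree(entry, ignore_errors=True)
            removed.append(str(entry))
    return removed
```

On a YARN cluster this would be run periodically (e.g. from cron) on each NodeManager host against each of the configured local directories; it is a stopgap that reclaims space, not a fix for the missing cleanup on job kill.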