[ 
https://issues.apache.org/jira/browse/SPARK-1860?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14152621#comment-14152621
 ] 

Matt Cheah commented on SPARK-1860:
-----------------------------------

ExecutorRunner seems to distinguish the various ways the Executor can exit, 
and it also creates the working directory in fetchAndRunExecutor(). We can 
catch all of the exit cases there and delete the directory in every case.

In the case that the executor exited with a failure, however, it would be best 
to preserve the logs instead of blindly deleting the whole directory.

On that note, one other thought is that perhaps we actually want to preserve 
the directory entirely upon crash since preserving the state will allow us to 
better understand what happened, i.e. what jars and files were present and so 
on.
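To make the idea concrete, here is a minimal sketch of that cleanup decision. This is illustrative only, not Spark's actual ExecutorRunner code: the names cleanupWorkDir and deleteRecursively are hypothetical, and the real implementation would hook into ExecutorRunner's exit handling in fetchAndRunExecutor().

```scala
import java.io.File

object WorkDirCleanupSketch {
  // Hypothetical helper: after the executor process exits, decide whether
  // to remove its work dir. Returns true if the directory was deleted.
  def cleanupWorkDir(workDir: File, exitCode: Int): Boolean = {
    if (exitCode == 0) {
      // Clean exit: the jars, files, and logs are no longer needed.
      deleteRecursively(workDir)
      true
    } else {
      // Crash or non-zero exit: keep the directory so the jars, files,
      // and logs remain available for post-mortem debugging.
      false
    }
  }

  // Recursively delete a directory tree.
  def deleteRecursively(f: File): Unit = {
    Option(f.listFiles()).getOrElse(Array.empty[File]).foreach(deleteRecursively)
    f.delete()
  }
}
```

A variant of this could also keep only the log files on failure rather than the entire directory, if preserving jars and files turns out to be too costly.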

> Standalone Worker cleanup should not clean up running executors
> ---------------------------------------------------------------
>
>                 Key: SPARK-1860
>                 URL: https://issues.apache.org/jira/browse/SPARK-1860
>             Project: Spark
>          Issue Type: Bug
>          Components: Deploy
>    Affects Versions: 1.0.0
>            Reporter: Aaron Davidson
>            Priority: Blocker
>
> With its default settings, the standalone worker cleanup code cleans up all 
> application data every 7 days. This includes jars that were added to any 
> executors that happen to be running for longer than 7 days, hitting streaming 
> jobs especially hard.
> Executors' log/data folders should not be cleaned up while they're still 
> running. Until then, this behavior should not be enabled by default.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
