[ 
https://issues.apache.org/jira/browse/SPARK-4280?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14200785#comment-14200785
 ] 

Andrew Or commented on SPARK-4280:
----------------------------------

Hey [~sandyr], does this require the application to explicitly uncache the RDDs? 
My concern is that we cache some blocks behind the application's back (e.g. 
broadcast and streaming blocks) and don't currently display them on the UI, in 
which case the application will never remove those executors. Some mechanism 
that unconditionally blows away all the blocks on an executor would be handy.

> In dynamic allocation, add option to never kill executors with cached blocks
> ----------------------------------------------------------------------------
>
>                 Key: SPARK-4280
>                 URL: https://issues.apache.org/jira/browse/SPARK-4280
>             Project: Spark
>          Issue Type: Improvement
>          Components: Spark Core
>    Affects Versions: 1.2.0
>            Reporter: Sandy Ryza
>
> Even with the external shuffle service, this is useful in situations like 
> Hive on Spark where a query might require caching some data. We want to be 
> able to give back executors after the job ends, but not during the job if it 
> would delete intermediate results.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
