[ https://issues.apache.org/jira/browse/SPARK-9197?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14633734#comment-14633734 ]

Sean Owen commented on SPARK-9197:
----------------------------------

Yes, that must necessarily happen; otherwise, in general, you couldn't 
decommission executors at all. Is this just a question? I think there have been 
some discussions about whether this should be taken into account (i.e., whether 
to prefer deallocating an executor with no cached partitions when possible). I 
don't think it's worth the added complexity and I/O to save cached partitions, 
given that they can be re-created if needed -- and they may never be needed again.
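For context, later Spark releases added a separate idle timeout for executors holding cached blocks, `spark.dynamicAllocation.cachedExecutorIdleTimeout` (it postdates 1.4.1, so check your release before relying on it). A sketch of the relevant spark-defaults.conf entries; the timeout values here are illustrative, not recommendations:

```
# Sketch of dynamic-allocation settings; cachedExecutorIdleTimeout
# is not available in 1.4.1 (the affected version of this issue).
spark.dynamicAllocation.enabled                     true
spark.dynamicAllocation.executorIdleTimeout         60s
spark.dynamicAllocation.cachedExecutorIdleTimeout   600s
```

With a large (or infinite, the default) cached timeout, executors that hold cached partitions are kept alive much longer than idle executors without them.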

> Cached RDD partitions are lost when executors are dynamically deallocated
> -------------------------------------------------------------------------
>
>                 Key: SPARK-9197
>                 URL: https://issues.apache.org/jira/browse/SPARK-9197
>             Project: Spark
>          Issue Type: Bug
>          Components: YARN
>    Affects Versions: 1.4.1
>            Reporter: Ryan Williams
>
> Currently, dynamic allocation cleans up executors that have not run any tasks 
> for a certain amount of time.
> However, this often leads to cached RDD partitions being lost.
> Should dynamic allocation leave executors alone that have cached partitions? 
> Should this be configurable?
> Is there any interest in code that would shuffle cached partitions around in 
> preparation for executor-deallocation, to avoid this? Such logic could be 
> useful in general for maintaining persisted RDDs across executor churn.
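Short of the shuffle-before-deallocation logic proposed above, one user-side workaround is to persist with a replicated storage level, so that losing any single executor does not drop the only copy of a partition. A minimal sketch, assuming an existing SparkContext `sc` and a hypothetical input path:

```scala
import org.apache.spark.storage.StorageLevel

// Persist with 2x replication: each cached block lives on two executors,
// so deallocating one of them does not lose the block (at the cost of
// doubling the memory used for caching).
val lineLengths = sc.textFile("hdfs:///path/to/input")  // hypothetical path
  .map(_.length)
  .persist(StorageLevel.MEMORY_ONLY_2)
```

This trades memory for resilience and still cannot survive the loss of both replicas, so it is a mitigation rather than a fix.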



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
