[ https://issues.apache.org/jira/browse/SPARK-35533?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]
abhishek kumar tiwari updated SPARK-35533:
------------------------------------------
Summary: Do not drop cached RDD blocks to accommodate blocks from
decommissioned block manager if enough memory is not available (was: Do not
drop cached RDD blocks to accommodate blocks from decommissioning block manager
if enough memory is not available)
> Do not drop cached RDD blocks to accommodate blocks from decommissioned block
> manager if enough memory is not available
> -----------------------------------------------------------------------------------------------------------------------
>
> Key: SPARK-35533
> URL: https://issues.apache.org/jira/browse/SPARK-35533
> Project: Spark
> Issue Type: Sub-task
> Components: Spark Core
> Affects Versions: 3.1.1
> Reporter: abhishek kumar tiwari
> Priority: Major
> Fix For: 3.2.0
>
>
> In the current block manager decommissioning flow, existing cached blocks in
> memory are dropped if enough memory is not available to accommodate blocks
> from a decommissioned block manager.
>
> Why should blocks from a decommissioned block manager be accommodated at the
> cost of dropping blocks that are already cached in memory?
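>
> A minimal sketch of the intended policy, in Scala (Spark Core's language).
> The names below (MemoryPool, tryPutWithoutEviction, acceptMigratedBlock) are
> hypothetical illustrations, not Spark's actual internals: a migrated block is
> kept in memory only if it fits in free space, and otherwise falls back to
> disk instead of evicting cached RDD blocks.
> {code:scala}
> // Hypothetical sketch, not Spark internals: a migrated block never triggers
> // eviction of cached blocks; it either fits in free memory or goes to disk.
> object DecommissionMigrationSketch {
>   final case class Block(id: String, size: Long)
>
>   final class MemoryPool(val capacity: Long) {
>     private var used = 0L
>     private val cached = scala.collection.mutable.Map.empty[String, Block]
>
>     def free: Long = capacity - used
>
>     // Store the block only if it fits in currently free memory; never
>     // evict existing cached blocks to make room for it.
>     def tryPutWithoutEviction(b: Block): Boolean =
>       if (b.size <= free) { cached(b.id) = b; used += b.size; true }
>       else false
>   }
>
>   // Receiving side of a decommission migration: prefer free memory,
>   // otherwise fall back to disk rather than dropping cached RDD blocks.
>   def acceptMigratedBlock(pool: MemoryPool, b: Block): String =
>     if (pool.tryPutWithoutEviction(b)) s"${b.id}: stored in memory"
>     else s"${b.id}: written to disk (no eviction of cached blocks)"
>
>   def main(args: Array[String]): Unit = {
>     val pool = new MemoryPool(capacity = 100L)
>     println(acceptMigratedBlock(pool, Block("rdd_1_0", 60))) // fits in memory
>     println(acceptMigratedBlock(pool, Block("rdd_2_0", 60))) // would need eviction -> disk
>   }
> }
> {code}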