[
https://issues.apache.org/jira/browse/SPARK-35533?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
abhishek kumar tiwari updated SPARK-35533:
------------------------------------------
Description:
In the current block manager decommissioning flow, existing cached blocks in memory
are dropped if enough memory is not available to accommodate blocks from the
decommissioned block manager.
Why should blocks from a decommissioned block manager have higher priority than
already cached blocks?
We should place blocks from a decommissioned block manager on a peer block manager
only when enough memory is available.
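For illustration, below is a minimal, self-contained Scala sketch of the proposed policy. This is not Spark's actual BlockManager/MemoryStore code; the names PeerMemoryStore and offerDecommissionedBlock are hypothetical. It contrasts the current behavior (evicting cached blocks to make room) with the proposal (accept a migrated block only when free memory can hold it, never dropping blocks already cached on the peer).

{code:scala}
// Hypothetical sketch of the proposed acceptance policy on a peer block
// manager. These classes are illustrative only, not Spark internals.
object DecommissionPolicySketch {

  final case class Block(id: String, sizeBytes: Long)

  /** Simplified stand-in for a peer's in-memory block store. */
  final class PeerMemoryStore(maxMemoryBytes: Long) {
    private var usedBytes: Long = 0L
    private val cached = scala.collection.mutable.LinkedHashMap.empty[String, Block]

    def freeBytes: Long = maxMemoryBytes - usedBytes

    /** Current behavior: may evict already-cached blocks to make room. */
    def putWithEviction(block: Block): Unit = {
      while (freeBytes < block.sizeBytes && cached.nonEmpty) {
        val (evictedId, evicted) = cached.head
        cached.remove(evictedId)
        usedBytes -= evicted.sizeBytes
        println(s"Evicted cached block $evictedId to make room")
      }
      if (freeBytes >= block.sizeBytes) {
        cached.put(block.id, block)
        usedBytes += block.sizeBytes
      }
    }

    /**
     * Proposed behavior for blocks migrated from a decommissioned block
     * manager: accept the block only if it fits in currently free memory,
     * never dropping blocks that are already cached on this peer.
     */
    def offerDecommissionedBlock(block: Block): Boolean = {
      if (freeBytes >= block.sizeBytes) {
        cached.put(block.id, block)
        usedBytes += block.sizeBytes
        true
      } else {
        false // caller would try another peer instead of evicting here
      }
    }
  }

  def main(args: Array[String]): Unit = {
    val peer = new PeerMemoryStore(maxMemoryBytes = 100L)
    peer.putWithEviction(Block("rdd_1_0", 60L))
    peer.putWithEviction(Block("rdd_1_1", 30L))

    // A 50-byte migrated block does not fit in the 10 bytes still free, so it
    // is rejected rather than displacing rdd_1_0 or rdd_1_1.
    val accepted = peer.offerDecommissionedBlock(Block("rdd_2_0", 50L))
    println(s"Migrated block accepted: $accepted") // false
  }
}
{code}

Under this policy, the decommissioning block manager would fall back to another peer (or skip replication of that block) instead of displacing data already cached on the target.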
> Do not drop cached RDD blocks to accommodate blocks from decommissioned block
> manager if enough memory is not available
> -----------------------------------------------------------------------------------------------------------------------
>
> Key: SPARK-35533
> URL: https://issues.apache.org/jira/browse/SPARK-35533
> Project: Spark
> Issue Type: Sub-task
> Components: Spark Core
> Affects Versions: 3.1.1
> Reporter: abhishek kumar tiwari
> Priority: Major
> Fix For: 3.2.0
>
>
> In the current block manager decommissioning flow, existing cached blocks in
> memory are dropped if enough memory is not available to accommodate blocks
> from the decommissioned block manager.
>
> Why should blocks from a decommissioned block manager have higher priority
> than already cached blocks?
> We should place blocks from a decommissioned block manager on a peer block
> manager only when enough memory is available.
--
This message was sent by Atlassian Jira
(v8.3.4#803005)
---------------------------------------------------------------------
To unsubscribe, e-mail: [email protected]
For additional commands, e-mail: [email protected]