[
https://issues.apache.org/jira/browse/SPARK-55341?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
Herman van Hövell resolved SPARK-55341.
---------------------------------------
Fix Version/s: 4.2.0
Resolution: Fixed
Issue resolved by pull request 54118
[https://github.com/apache/spark/pull/54118]
> Add disk only optional feature flag when adding cached blocks
> -------------------------------------------------------------
>
> Key: SPARK-55341
> URL: https://issues.apache.org/jira/browse/SPARK-55341
> Project: Spark
> Issue Type: Task
> Components: Connect, Spark Core
> Affects Versions: 4.2.0
> Reporter: Pranav Dev
> Assignee: Pranav Dev
> Priority: Minor
> Labels: pull-request-available
> Fix For: 4.2.0
>
> Original Estimate: 336h
> Remaining Estimate: 336h
>
> Cached artifact blocks in ArtifactManager currently use the
> {{MEMORY_AND_DISK_SER}} storage level. In some scenarios with large
> artifacts, especially large local relations, this can cause memory pressure.
> We can add a flag to control the storage level used for cached blocks:
> * When enabled: uses the {{DISK_ONLY}} storage level to reduce memory pressure
> * When disabled (default): uses the {{MEMORY_AND_DISK_SER}} storage level
> (current behavior)
> This allows users to opt into disk-only storage for cached artifacts when
> memory is constrained, while maintaining backward compatibility with the
> default behavior.
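The proposed behavior amounts to a simple storage-level selection. A minimal sketch, assuming a boolean flag (the helper name is hypothetical, and stand-in types replace Spark's {{org.apache.spark.storage.StorageLevel}} constants so the snippet is self-contained):

```scala
// Stand-ins for org.apache.spark.storage.StorageLevel constants, so this
// sketch compiles without a Spark dependency.
sealed trait StorageLevel
case object MEMORY_AND_DISK_SER extends StorageLevel
case object DISK_ONLY extends StorageLevel

// Hypothetical helper: picks the storage level used when caching artifact
// blocks. The actual flag name and wiring in the PR may differ; this only
// illustrates the opt-in shape described in the issue.
def cachedBlockStorageLevel(diskOnlyEnabled: Boolean): StorageLevel =
  if (diskOnlyEnabled) DISK_ONLY   // opt-in: spill straight to disk, reduce memory pressure
  else MEMORY_AND_DISK_SER         // default: current behavior, backward compatible
```

Because the flag defaults to disabled, existing deployments keep the serialized memory-and-disk behavior unless they explicitly opt in.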