[
https://issues.apache.org/jira/browse/SPARK-34541?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
Sean R. Owen resolved SPARK-34541.
----------------------------------
Fix Version/s: 3.2.0
Resolution: Fixed
Issue resolved by pull request 31664
[https://github.com/apache/spark/pull/31664]
> Fixed an issue where data could not be cleaned up when unregisterShuffle
> ------------------------------------------------------------------------
>
> Key: SPARK-34541
> URL: https://issues.apache.org/jira/browse/SPARK-34541
> Project: Spark
> Issue Type: Bug
> Components: Spark Core
> Affects Versions: 3.0.0
> Reporter: yikf
> Assignee: yikf
> Priority: Minor
> Fix For: 3.2.0
>
>
> When we use the old shuffle fetch protocol, we use the partitionId as the mapId when
> constructing the ShuffleBlockId, but we cache `context.taskAttemptId()` as the mapId in
> `taskIdMapsForShuffle` when `getWriter[K, V]` is called.
> As a result, shuffle data cannot be cleaned up at unregisterShuffle time: cleanup removes
> a shuffle's data using the mapIds stored in `taskIdMapsForShuffle`, and those mapIds are
> `context.taskAttemptId()` values rather than the partitionIds under which the data was
> actually written.
>
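> Below is a minimal, self-contained sketch of the mismatch. The class and method names
> are illustrative only (they mirror, but are not, the real Spark internals such as
> SortShuffleManager), and the "block files" are modeled as a simple set of keys; it just
> shows why removing by the cached taskAttemptId misses data written under the partitionId.
>
> {code:scala}
> import java.util.concurrent.ConcurrentHashMap
> import scala.collection.mutable
>
> object ShuffleCleanupSketch {
>   // shuffleId -> mapIds remembered for later cleanup (analogous to taskIdMapsForShuffle)
>   private val taskIdMapsForShuffle = new ConcurrentHashMap[Int, mutable.Set[Long]]()
>   // (shuffleId, mapId) -> "data file" actually written
>   private val writtenBlocks = mutable.Set.empty[(Int, Long)]
>
>   def getWriter(shuffleId: Int, partitionId: Long, taskAttemptId: Long): Unit = {
>     // Bug: the task attempt id is cached for cleanup ...
>     taskIdMapsForShuffle
>       .computeIfAbsent(shuffleId, _ => mutable.Set.empty[Long])
>       .add(taskAttemptId)
>     // ... but with the old fetch protocol the block is keyed by the partition id.
>     writtenBlocks.add((shuffleId, partitionId))
>   }
>
>   def unregisterShuffle(shuffleId: Int): Unit = {
>     // Cleanup removes blocks by the cached mapIds, which never match the
>     // partitionIds the data was written under, so the files are leaked.
>     Option(taskIdMapsForShuffle.remove(shuffleId)).foreach { mapIds =>
>       mapIds.foreach(mapId => writtenBlocks.remove((shuffleId, mapId)))
>     }
>   }
>
>   def main(args: Array[String]): Unit = {
>     getWriter(shuffleId = 0, partitionId = 3L, taskAttemptId = 42L)
>     unregisterShuffle(0)
>     // Prints Set((0,3)): the shuffle data is still present after unregisterShuffle.
>     println(writtenBlocks)
>   }
> }
> {code}
>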
--
This message was sent by Atlassian Jira
(v8.3.4#803005)
---------------------------------------------------------------------
To unsubscribe, e-mail: [email protected]
For additional commands, e-mail: [email protected]