[
https://issues.apache.org/jira/browse/SPARK-2711?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
Matei Zaharia updated SPARK-2711:
---------------------------------
Target Version/s: 1.1.0
> Create a ShuffleMemoryManager that allocates across spilling collections in
> the same task
> -----------------------------------------------------------------------------------------
>
> Key: SPARK-2711
> URL: https://issues.apache.org/jira/browse/SPARK-2711
> Project: Spark
> Issue Type: Improvement
> Reporter: Matei Zaharia
> Assignee: Matei Zaharia
> Priority: Critical
>
> Right now, if a single task contains two ExternalAppendOnlyMaps, they don't
> compete correctly for memory. This can happen in a task that is both reducing
> data from its parent RDD and writing it out to files for a future shuffle,
> e.g. rdd.groupByKey(...).map(...).groupByKey(...) (grouping by another key).
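To illustrate the kind of manager this issue asks for, here is a minimal, hypothetical Python sketch (not Spark's actual implementation; all names are illustrative). Each spilling collection in a task asks the manager for memory; the manager caps every active collection at its fair share of the pool (pool / N active collections) and returns how many bytes were actually granted. A grant of 0 signals the caller to spill to disk. A real manager would also let a starved collection force over-quota peers to spill or wait for memory to free up.

```python
# Hypothetical sketch of fair memory allocation across spilling
# collections within one task. Names are illustrative, not Spark's API.

class ShuffleMemoryManager:
    def __init__(self, pool_bytes):
        self.pool = pool_bytes
        self.granted = {}  # collection id -> bytes currently granted

    def try_to_acquire(self, collection_id, num_bytes):
        """Grant up to the caller's fair share (pool / N active
        collections) and up to the free space left in the pool.
        Returns the bytes actually granted; 0 means the caller
        should spill its in-memory contents to disk."""
        self.granted.setdefault(collection_id, 0)
        n = len(self.granted)
        fair_share = self.pool // n
        used = self.granted[collection_id]
        free = self.pool - sum(self.granted.values())
        grant = max(0, min(num_bytes, fair_share - used, free))
        self.granted[collection_id] += grant
        return grant

    def release(self, collection_id):
        """Called when a collection spills or finishes, returning
        its memory to the pool."""
        self.granted.pop(collection_id, None)


# Two maps in the same task competing for a 100-byte pool:
mgr = ShuffleMemoryManager(100)
mgr.try_to_acquire("reduce-side map", 70)   # grants 70 (only consumer)
mgr.try_to_acquire("shuffle-out map", 70)   # grants 30 (fair share is 50,
                                            # but only 30 bytes are free)
mgr.try_to_acquire("reduce-side map", 10)   # grants 0: already over its
                                            # fair share, so it must spill
```

Without such a manager, whichever ExternalAppendOnlyMap allocates first can take the whole pool, forcing the other to spill constantly.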
--
This message was sent by Atlassian JIRA
(v6.2#6252)