Matei Zaharia created SPARK-2047:
------------------------------------
Summary: Use less memory in AppendOnlyMap.destructiveSortedIterator
Key: SPARK-2047
URL: https://issues.apache.org/jira/browse/SPARK-2047
Project: Spark
Issue Type: Improvement
Components: Spark Core
Reporter: Matei Zaharia
This method tries to sort the key-value pairs in the map in place, but it ends
up allocating a Tuple2 object for each pair, which adds a nontrivial amount of
memory (32 or more bytes per entry on a 64-bit JVM). We could instead sort the
objects in place within the "data" array, or allocate an int array of slot
indices and sort those with a custom comparator that compares the keys they
point to. The latter is probably the easiest to start with.
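
Roughly, the index-array idea would look something like the sketch below. This
is a toy, self-contained example, not the real AppendOnlyMap code: the flat
layout (key at 2*i, value at 2*i+1, null for empty slots), the string keys, and
the name IndexSortSketch are all simplified assumptions for illustration; the
real map holds arbitrary K/V and would use the caller-supplied key comparator.

object IndexSortSketch {
  def main(args: Array[String]): Unit = {
    // Hypothetical flat layout mirroring AppendOnlyMap's "data" array:
    // the key for slot i sits at data(2 * i), its value at data(2 * i + 1);
    // empty slots hold null. The string keys here are just for illustration.
    val data: Array[AnyRef] = Array(
      "banana", Int.box(2),
      null,     null,
      "apple",  Int.box(5),
      "cherry", Int.box(1))
    val capacity = data.length / 2

    // Gather the indices of the occupied slots: roughly 4 bytes per entry
    // instead of a 32+ byte Tuple2 per entry.
    val indices = new Array[Int](capacity)
    var count = 0
    var i = 0
    while (i < capacity) {
      if (data(2 * i) != null) { indices(count) = i; count += 1 }
      i += 1
    }

    // Sort the index array with a comparator that looks up the keys in "data".
    // (A memory-tight version would sort the primitive int array directly,
    // e.g. with a hand-rolled quicksort, so no boxing happens during the sort.)
    val sorted: Array[Int] = indices.take(count).sortWith { (a, b) =>
      data(2 * a).asInstanceOf[String].compareTo(data(2 * b).asInstanceOf[String]) < 0
    }

    // Materialize a Tuple2 only when the iterator is actually consumed.
    val iter: Iterator[(AnyRef, AnyRef)] =
      sorted.iterator.map(j => (data(2 * j), data(2 * j + 1)))

    iter.foreach(println)  // (apple,5), (banana,2), (cherry,1)
  }
}

The point of the sketch is that the per-entry overhead during the sort is one
int, and Tuple2 objects are only created lazily as the iterator is drained.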