[ https://issues.apache.org/jira/browse/SPARK-13921?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]
Josh Rosen resolved SPARK-13921.
--------------------------------
    Resolution: Fixed
    Fix Version/s: 2.0.0

Issue resolved by pull request 11748
[https://github.com/apache/spark/pull/11748]

> Store serialized blocks as multiple chunks in MemoryStore
> ---------------------------------------------------------
>
>                 Key: SPARK-13921
>                 URL: https://issues.apache.org/jira/browse/SPARK-13921
>             Project: Spark
>          Issue Type: Improvement
>          Components: Block Manager
>            Reporter: Josh Rosen
>            Assignee: Josh Rosen
>             Fix For: 2.0.0
>
> Instead of storing each serialized block in a single ByteBuffer, the BlockManager should be capable of storing a serialized block in multiple chunks, each occupying a separate ByteBuffer.
> This change will improve the efficiency of memory allocation and the accuracy of memory accounting when serializing blocks. Our current serialization code uses a {{ByteBufferOutputStream}}, which doubles and re-allocates its backing byte array; this increases the peak memory requirement during serialization, since we must hold extra memory while expanding the array. In addition, we currently don't account for the wasted space at the end of the ByteBuffer's backing array, so a 129-megabyte serialized block may actually consume 256 megabytes of memory. After switching to storing blocks in multiple chunks, we'll be able to trim the backing buffers so that no space is wasted.
> This change is also a prerequisite to caching blocks larger than 2GB (although full support for that depends on several other changes which have not been implemented yet).

--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
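The doubling-versus-chunking trade-off described in the issue can be sketched as follows. This is a hypothetical Java illustration (the class name `ChunkedOutput` and its methods are invented for this sketch, not Spark's actual implementation): appending into fixed-size chunks means no large re-allocation ever happens, and only the final chunk's tail can be wasted, so a 129 MB block would occupy roughly 129 MB rather than 256 MB.

```java
import java.nio.ByteBuffer;
import java.util.ArrayList;
import java.util.List;

// Hypothetical sketch: instead of one doubling byte array, append into
// fixed-size chunks. Only the last chunk needs trimming, so at most
// (chunkSize - 1) bytes of backing storage are ever unaccounted for.
public class ChunkedOutput {
    private final int chunkSize;
    private final List<byte[]> chunks = new ArrayList<>();
    private int posInLastChunk = 0;

    public ChunkedOutput(int chunkSize) {
        this.chunkSize = chunkSize;
    }

    public void write(byte[] data) {
        int offset = 0;
        while (offset < data.length) {
            if (chunks.isEmpty() || posInLastChunk == chunkSize) {
                // Allocate a new fixed-size chunk; no copying of old data.
                chunks.add(new byte[chunkSize]);
                posInLastChunk = 0;
            }
            byte[] last = chunks.get(chunks.size() - 1);
            int n = Math.min(chunkSize - posInLastChunk, data.length - offset);
            System.arraycopy(data, offset, last, posInLastChunk, n);
            posInLastChunk += n;
            offset += n;
        }
    }

    // Expose the block as a list of ByteBuffers; the final chunk is
    // sliced to its actual length so no tail space is reported.
    public List<ByteBuffer> toChunks() {
        List<ByteBuffer> out = new ArrayList<>();
        for (int i = 0; i < chunks.size(); i++) {
            int len = (i == chunks.size() - 1) ? posInLastChunk : chunkSize;
            out.add(ByteBuffer.wrap(chunks.get(i), 0, len).slice());
        }
        return out;
    }

    public long size() {
        if (chunks.isEmpty()) return 0;
        return (long) (chunks.size() - 1) * chunkSize + posInLastChunk;
    }

    public static void main(String[] args) {
        ChunkedOutput out = new ChunkedOutput(4);
        out.write(new byte[]{1, 2, 3, 4, 5, 6});   // spans two chunks
        System.out.println(out.size());             // 6
        System.out.println(out.toChunks().size());  // 2
    }
}
```

Because each chunk stays under the 2 GB `ByteBuffer` limit, a list of chunks can also represent blocks larger than 2 GB, which is the prerequisite the issue mentions.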