[
https://issues.apache.org/jira/browse/SPARK-6190?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16492930#comment-16492930
]
Imran Rashid commented on SPARK-6190:
-------------------------------------
[~jamgan] yes, this particular task was done a while ago. However, I'm not sure
what that meant for end-user behavior by itself. I put more comments on
the parent JIRA, SPARK-6235, about the current state of things in Spark 2.3 and
what is left to do.
> create LargeByteBuffer abstraction for eliminating 2GB limit on blocks
> ----------------------------------------------------------------------
>
> Key: SPARK-6190
> URL: https://issues.apache.org/jira/browse/SPARK-6190
> Project: Spark
> Issue Type: Sub-task
> Components: Spark Core
> Reporter: Imran Rashid
> Assignee: Josh Rosen
> Priority: Major
> Attachments: LargeByteBuffer_v3.pdf
>
>
> A key component in eliminating the 2GB limit on blocks is creating a proper
> abstraction for storing more than 2GB. Currently Spark is limited by a
> reliance on NIO ByteBuffer and Netty ByteBuf, both of which are capped at
> 2GB. This task will introduce the new abstraction and the relevant
> implementation and utilities, without affecting the existing implementation
> at all.
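The 2GB cap comes from `java.nio.ByteBuffer` using an `int` for capacity and
position, so a single buffer cannot exceed `Integer.MAX_VALUE` bytes. A common
way around this, and the rough idea behind a LargeByteBuffer-style abstraction,
is to chain several ByteBuffers behind a `long`-addressed facade. The sketch
below is purely illustrative; the class name, constructor, and methods are
assumptions for this example, not Spark's actual API:

```java
import java.nio.ByteBuffer;

// Illustrative sketch only: a long-addressed view over multiple ByteBuffers,
// sidestepping the int-based 2GB limit of a single buffer.
public class ChunkedLargeByteBuffer {
    private final ByteBuffer[] chunks;
    private final long totalSize;

    public ChunkedLargeByteBuffer(ByteBuffer[] chunks) {
        this.chunks = chunks;
        long size = 0;
        for (ByteBuffer b : chunks) {
            size += b.remaining();
        }
        this.totalSize = size;
    }

    // Total size is a long, so it can exceed Integer.MAX_VALUE.
    public long size() {
        return totalSize;
    }

    // Absolute read at a long offset: walk the chunks to find the one
    // containing pos, then do an int-indexed read inside that chunk.
    public byte get(long pos) {
        int i = 0;
        long offset = pos;
        while (offset >= chunks[i].remaining()) {
            offset -= chunks[i].remaining();
            i++;
        }
        return chunks[i].get(chunks[i].position() + (int) offset);
    }

    public static void main(String[] args) {
        ByteBuffer a = ByteBuffer.wrap(new byte[]{1, 2, 3});
        ByteBuffer b = ByteBuffer.wrap(new byte[]{4, 5});
        ChunkedLargeByteBuffer buf =
            new ChunkedLargeByteBuffer(new ByteBuffer[]{a, b});
        System.out.println(buf.size());   // 5
        System.out.println(buf.get(3L));  // 4 (first byte of second chunk)
    }
}
```

The same chunking trick has to be mirrored on the network side, since Netty's
`ByteBuf` has the equivalent int-based limit; see the attached design doc for
the full design discussion.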
--
This message was sent by Atlassian JIRA
(v7.6.3#76005)