[ https://issues.apache.org/jira/browse/HBASE-1197?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12672920#action_12672920 ]
Andrew Purtell commented on HBASE-1197:
---------------------------------------
From: Ryan Rawson
To: [email protected]
I doubt we could chunk values straight into HFile - you'd have to have one file
per value. If your value is that large (more than hundreds of megs), maybe you
shouldn't be storing it in HBase - store it directly in HDFS and use HBase to
index the content and provide filename pointers.
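A minimal sketch of that pointer pattern, assuming the 0.20-era client API
(HBaseConfiguration, HTable, Put); the /blobs/ path layout and the blob_index
table with its meta:path column are made-up names:

    import org.apache.hadoop.fs.FSDataOutputStream;
    import org.apache.hadoop.fs.FileSystem;
    import org.apache.hadoop.fs.Path;
    import org.apache.hadoop.hbase.HBaseConfiguration;
    import org.apache.hadoop.hbase.client.HTable;
    import org.apache.hadoop.hbase.client.Put;
    import org.apache.hadoop.hbase.util.Bytes;

    public class BlobPointer {
      public static void storeBlob(String rowKey, byte[] blob) throws Exception {
        HBaseConfiguration conf = new HBaseConfiguration();
        // 1. Write the large value straight to HDFS.
        FileSystem fs = FileSystem.get(conf);
        Path blobPath = new Path("/blobs/" + rowKey);   // made-up layout
        FSDataOutputStream out = fs.create(blobPath);
        out.write(blob);
        out.close();
        // 2. Store only a small filename pointer (plus index data) in HBase.
        HTable table = new HTable(conf, "blob_index");  // made-up table
        Put put = new Put(Bytes.toBytes(rowKey));
        put.add(Bytes.toBytes("meta"), Bytes.toBytes("path"),
                Bytes.toBytes(blobPath.toString()));
        table.put(put);
      }
    }

Reading is the mirror image: get the row, read meta:path, and open that
file from HDFS as a stream.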
As it stands, a key/value has to live in memcache for some period of time
(seconds? minutes?), so holding an entire key/value in memory has to be
feasible anyway. Not supporting chunking/streaming doesn't seem like a major
deficiency.
I think of HBase as a way to efficiently store smallish values on HDFS. We
should support reasonably large values, but right now there is a 2 GB max
value size (the value length is serialized as a signed int). With enough RAM
thrown at HBase it should be possible to support values approaching that limit.
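As a back-of-the-envelope check on that ceiling (the length field is a signed
32-bit int, per the comment above):

    public class MaxValueSize {
      public static void main(String[] args) {
        // A signed 32-bit length field caps a single value at
        // Integer.MAX_VALUE bytes: 2^31 - 1 = 2,147,483,647.
        long maxValueBytes = Integer.MAX_VALUE;
        System.out.printf("max value size: %.3f GiB%n",
            maxValueBytes / (double) (1L << 30)); // prints ~2.000 GiB
      }
    }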
> IPC of large cells should transfer in chunks not via naive full copy
> --------------------------------------------------------------------
>
> Key: HBASE-1197
> URL: https://issues.apache.org/jira/browse/HBASE-1197
> Project: Hadoop HBase
> Issue Type: Sub-task
> Reporter: Andrew Purtell
> Fix For: 0.20.0
>
>
> Several instances of OOME when trying to serve up large cells to clients have
> been observed. IPC should send large cell content in chunks instead of as one
> large naive copy.
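For illustration only, not HBase's actual IPC code: a minimal sketch of the
chunked alternative the issue describes, streaming a value of known length
through a fixed-size buffer instead of materializing one full byte[] copy
(the 64 KB chunk size is an arbitrary choice):

    import java.io.DataOutputStream;
    import java.io.IOException;
    import java.io.InputStream;

    public class ChunkedSend {
      static final int CHUNK_SIZE = 64 * 1024; // arbitrary chunk size

      // Streams `length` bytes from `in` to `out` without ever holding the
      // whole value in memory, unlike a naive new byte[length] full copy.
      public static void sendInChunks(InputStream in, DataOutputStream out,
                                      long length) throws IOException {
        out.writeLong(length);             // announce total size up front
        byte[] buf = new byte[CHUNK_SIZE];
        long remaining = length;
        while (remaining > 0) {
          int n = in.read(buf, 0, (int) Math.min(buf.length, remaining));
          if (n < 0) throw new IOException("unexpected EOF");
          out.write(buf, 0, n);
          remaining -= n;
        }
      }
    }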