[ https://issues.apache.org/jira/browse/HBASE-7239?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13636114#comment-13636114 ]
Hudson commented on HBASE-7239:
-------------------------------
Integrated in hbase-0.95-on-hadoop2 #74 (See [https://builds.apache.org/job/hbase-0.95-on-hadoop2/74/])
HBASE-7239. Introduces chunked reading for large cellblocks (Revision 1469655)

Result = FAILURE
ddas :
Files :
* /hbase/branches/0.95/hbase-client/src/main/java/org/apache/hadoop/hbase/ipc/HBaseClient.java
* /hbase/branches/0.95/hbase-client/src/main/java/org/apache/hadoop/hbase/ipc/IPCUtil.java
> Verify protobuf serialization is correctly chunking upon read to avoid direct memory OOMs
> -----------------------------------------------------------------------------------------
>
> Key: HBASE-7239
> URL: https://issues.apache.org/jira/browse/HBASE-7239
> Project: HBase
> Issue Type: Sub-task
> Reporter: Lars Hofhansl
> Assignee: Devaraj Das
> Priority: Critical
> Fix For: 0.95.1
>
> Attachments: 7239-1.patch
>
>
> Result.readFields() used to read from the input stream in 8K chunks to avoid
> OOM issues with direct memory.
> (Reading variable-sized chunks into direct memory prevents the JVM from
> reusing the allocated direct memory, and direct memory is only collected
> during full GCs.)
> This is just to verify that the protobuf parseFrom-style methods do the right
> thing as well, so that we do not reintroduce this problem.
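For reference, a minimal sketch of the chunked-read pattern described above, assuming a plain InputStream and a known payload length (the class and method names are illustrative only, not the actual HBASE-7239 code):

{code:java}
import java.io.DataInputStream;
import java.io.IOException;
import java.io.InputStream;

/**
 * Illustrative sketch (not the HBase implementation): read 'len' bytes from
 * the stream in fixed 8K chunks instead of one variable-sized read, so the
 * NIO layer can keep reusing a small direct buffer of the same size.
 */
public final class ChunkedRead {
  private static final int CHUNK = 8 * 1024; // fixed chunk size

  public static byte[] readFully(InputStream in, int len) throws IOException {
    byte[] buf = new byte[len];
    DataInputStream din = new DataInputStream(in);
    int off = 0;
    while (off < len) {
      // Never request more than one chunk per read, to avoid variable-sized
      // direct-buffer allocations that are only reclaimed during full GCs.
      int toRead = Math.min(CHUNK, len - off);
      din.readFully(buf, off, toRead);
      off += toRead;
    }
    return buf;
  }
}
{code}

Reading in a fixed chunk size is what lets the underlying NIO layer reuse one small direct buffer per thread rather than allocating a new variable-sized one for every call.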