[ https://issues.apache.org/jira/browse/HBASE-7239?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13636188#comment-13636188 ]

Hudson commented on HBASE-7239:
-------------------------------

Integrated in HBase-TRUNK-on-Hadoop-2.0.0 #504 (See
[https://builds.apache.org/job/HBase-TRUNK-on-Hadoop-2.0.0/504/])
    HBASE-7239. Introduces chunked reading for large cellblocks (Revision 1469654)

     Result = FAILURE
ddas :
Files :
* /hbase/trunk/hbase-client/src/main/java/org/apache/hadoop/hbase/ipc/HBaseClient.java
* /hbase/trunk/hbase-client/src/main/java/org/apache/hadoop/hbase/ipc/IPCUtil.java

                
> Verify protobuf serialization is correctly chunking upon read to avoid direct 
> memory OOMs
> -----------------------------------------------------------------------------------------
>
>                 Key: HBASE-7239
>                 URL: https://issues.apache.org/jira/browse/HBASE-7239
>             Project: HBase
>          Issue Type: Sub-task
>            Reporter: Lars Hofhansl
>            Assignee: Devaraj Das
>            Priority: Critical
>             Fix For: 0.95.1
>
>         Attachments: 7239-1.patch
>
>
> Result.readFields() used to read from the input stream in 8k chunks to avoid
> OOM issues with direct memory.
> (Reading variable-sized chunks into direct memory prevents the JVM from
> reusing the allocated direct memory, and direct memory is only collected
> during full GCs.)
> This is just to verify that protobuf's parseFrom-style methods do the right
> thing as well, so that we do not reintroduce this problem.
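
For illustration only, below is a minimal sketch of the chunked-read pattern the description refers to, assuming a payload of known length read from a DataInputStream. The ChunkedRead class and readChunked method are hypothetical names, not the actual HBaseClient/IPCUtil code; the 8k chunk size mirrors the old Result.readFields() behavior. Keeping each read bounded means the NIO layer never has to allocate one large direct buffer that it cannot reuse.

// Illustrative sketch (not the actual HBase implementation): read 'length'
// bytes in fixed-size chunks so no single read requires a huge direct buffer.
import java.io.DataInputStream;
import java.io.IOException;

public class ChunkedRead {
    private static final int CHUNK_SIZE = 8 * 1024; // 8k, as in the old Result.readFields()

    /** Reads exactly 'length' bytes into a heap array, at most CHUNK_SIZE bytes per read. */
    public static byte[] readChunked(DataInputStream in, int length) throws IOException {
        byte[] buf = new byte[length];
        int offset = 0;
        while (offset < length) {
            int toRead = Math.min(CHUNK_SIZE, length - offset);
            in.readFully(buf, offset, toRead); // each call asks the stream for at most 8k
            offset += toRead;
        }
        return buf;
    }
}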
