Take a look at 
http://hbase.apache.org/0.94/apidocs/org/apache/hadoop/hbase/filter/ColumnPaginationFilter.html
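
For example, a rough sketch of paging through a wide row with
ColumnPaginationFilter (the table name, row key, and page size below are
placeholders, not taken from your setup):

import java.io.IOException;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.client.Get;
import org.apache.hadoop.hbase.client.HTable;
import org.apache.hadoop.hbase.client.Result;
import org.apache.hadoop.hbase.filter.ColumnPaginationFilter;
import org.apache.hadoop.hbase.util.Bytes;

public class PagedGet {
  public static void main(String[] args) throws IOException {
    Configuration conf = HBaseConfiguration.create();
    HTable table = new HTable(conf, "mytable"); // placeholder table name
    try {
      int pageSize = 1000; // columns per RPC; tune to your cell sizes
      int offset = 0;
      Result result;
      do {
        Get get = new Get(Bytes.toBytes("myrowkey1"));
        // return at most pageSize columns, starting at column offset
        get.setFilter(new ColumnPaginationFilter(pageSize, offset));
        result = table.get(get);
        // ... process result.raw() here ...
        offset += pageSize;
      } while (!result.isEmpty());
    } finally {
      table.close();
    }
  }
}

Each Get then returns a bounded slice of the row instead of the whole
thing, which should keep you under the operationTooLarge threshold.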

Cheers

On Sep 11, 2013, at 4:42 AM, John <[email protected]> wrote:

> Hi,
> 
> Thanks for your fast answer! By "the size becomes too big" I mean I have
> one row with thousands of columns. For example:
> 
> myrowkey1 -> column1, column2, column3 ... columnN
> 
> What do you mean by "change the batch size"? I will try to put together a
> little Java test case that reproduces the problem; it will take a moment.
> 
> 
> 
> 
> 2013/9/11 Jean-Marc Spaggiari <[email protected]>
> 
>> Hi John,
>> 
>> Just to be sure: what do you mean by "the size becomes too big"? The size
>> of a single column within this row, or the number of columns?
>> 
>> If it's the number of columns, you can change the batch size to get fewer
>> columns in a single call; see the sketch below. Can you share the relevant
>> piece of code doing the call?
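>> 
>> Something like this (a rough sketch; the table name and the batch and
>> caching values are placeholders, not from your code):
>> 
>> import org.apache.hadoop.hbase.HBaseConfiguration;
>> import org.apache.hadoop.hbase.client.HTable;
>> import org.apache.hadoop.hbase.client.Result;
>> import org.apache.hadoop.hbase.client.ResultScanner;
>> import org.apache.hadoop.hbase.client.Scan;
>> 
>> HTable table = new HTable(HBaseConfiguration.create(), "mytable");
>> Scan scan = new Scan();
>> scan.setBatch(1000); // at most 1000 columns per Result
>> scan.setCaching(1);  // fetch one Result per RPC
>> ResultScanner scanner = table.getScanner(scan);
>> try {
>>     for (Result r : scanner) {
>>         // with setBatch(), a single wide row comes back as several
>>         // partial Results, so no single response carries the whole row
>>     }
>> } finally {
>>     scanner.close();
>>     table.close();
>> }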
>> 
>> JM
>> 
>> 
>> 2013/9/11 John <[email protected]>
>> 
>>> Hi,
>>> 
>>> I store a lot of columns for one row key, and if the size becomes too
>>> big, the relevant RegionServer crashes when I try to get or scan the row.
>>> For example, if I try to get the relevant row I get this error:
>>> 
>>> 2013-09-11 12:46:43,696 WARN org.apache.hadoop.ipc.HBaseServer:
>>> (operationTooLarge): {"processingtimems":3091,"client":"192.168.0.34:52488","ti$
>>> 
>>> If I try to load the relevant row via Apache Pig and the HBaseStorage
>>> loader (which uses the scan operation), I get this message and after that
>>> the RegionServer crashes:
>>> 
>>> 2013-09-11 10:30:23,542 WARN org.apache.hadoop.ipc.HBaseServer:
>>> (responseTooLarge): {"processingtimems":1851,"call":"next(-588368116791418695, 1), rpc version=1, client version=29,$
>>> 
>>> I'm using Cloudera CDH 4.4.0 with HBase 0.94.6-cdh4.4.0.
>>> 
>>> Any clues?
>>> 
>>> regards
>> 
