[ https://issues.apache.org/jira/browse/HBASE-9931?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13829104#comment-13829104 ]

Dave Latham commented on HBASE-9931:
------------------------------------

I think a new config option that calls Scan.setBatch would do it.  I'm always a 
bit puzzled about how scanner batching and caching interact with mixed-length 
rows, but I imagine it would work out OK.  (Four years later, and we're still 
on a version where HBASE-1996 didn't make it.)
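To make the batching/caching distinction concrete: setBatch limits how many cells come back per Result (so one wide row is returned as several partial Results), while setCaching limits how many Results come back per RPC. Here is a minimal plain-Java sketch of the chunking side of that, simulating the cell-splitting rather than calling the HBase client (the row and column names are made up for illustration):

```java
import java.util.ArrayList;
import java.util.List;

public class BatchSketch {
    // Split one wide row's cells into chunks of at most `batch` cells,
    // mimicking how a scanner with Scan.setBatch(batch) would hand back
    // a wide row as several partial Results instead of one huge one.
    static List<List<String>> batchRow(List<String> cells, int batch) {
        List<List<String>> chunks = new ArrayList<>();
        for (int i = 0; i < cells.size(); i += batch) {
            chunks.add(new ArrayList<>(
                cells.subList(i, Math.min(i + batch, cells.size()))));
        }
        return chunks;
    }

    public static void main(String[] args) {
        // A hypothetical wide row with 10 cells, batched 4 at a time.
        List<String> wideRow = new ArrayList<>();
        for (int i = 0; i < 10; i++) wideRow.add("col" + i);
        List<List<String>> chunks = batchRow(wideRow, 4);
        System.out.println(chunks.size());        // 3 partial results
        System.out.println(chunks.get(2).size()); // last chunk holds 2 cells
    }
}
```

The point for CopyTable is that only one chunk of a row needs to be resident at a time, which is what keeps the mapper heap bounded.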

> Optional setBatch for CopyTable to copy large rows in batches
> -------------------------------------------------------------
>
>                 Key: HBASE-9931
>                 URL: https://issues.apache.org/jira/browse/HBASE-9931
>             Project: HBase
>          Issue Type: Improvement
>          Components: mapreduce
>            Reporter: Dave Latham
>
> We've had CopyTable jobs fail because a small number of rows are too wide 
> to fit into memory.  If we could specify the batch size for CopyTable 
> scans, that should let us break those large rows up into multiple 
> iterations to save heap.
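If the improvement landed as a CopyTable command-line option, an invocation might look like the sketch below. The --batch flag and the peer address are hypothetical here, not confirmed by this thread; they only illustrate the shape of the proposal:

```shell
# Hypothetical: pass a batch size through to Scan.setBatch so wide rows
# are copied in chunks of at most 1000 cells instead of all at once.
hbase org.apache.hadoop.hbase.mapreduce.CopyTable \
  --batch=1000 \
  --peer.adr=dstcluster:2181:/hbase \
  srctable
```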



--
This message was sent by Atlassian JIRA
(v6.1#6144)