[
https://issues.apache.org/jira/browse/HBASE-9931?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13837122#comment-13837122
]
Nick Dimiduk commented on HBASE-9931:
-------------------------------------
[~lhofhansl], [~stack], [~apurtell] Any objections on this one?
> Optional setBatch for CopyTable to copy large rows in batches
> -------------------------------------------------------------
>
> Key: HBASE-9931
> URL: https://issues.apache.org/jira/browse/HBASE-9931
> Project: HBase
> Issue Type: Improvement
> Components: mapreduce
> Reporter: Dave Latham
> Assignee: Nick Dimiduk
> Fix For: 0.98.0, 0.96.1, 0.94.15
>
> Attachments: HBASE-9931.00.patch, HBASE-9931.01.patch
>
>
> We've had CopyTable jobs fail because a small number of rows are too wide
> to fit into memory. If we could specify the batch size for CopyTable
> scans, it should be able to break those large rows up into multiple
> iterations and save heap.
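For context, a minimal sketch of the idea, assuming the new option simply wires a user-supplied value through to Scan.setBatch on the scan CopyTable builds (the value 1000 below is only illustrative, not something from the patch):

    import org.apache.hadoop.hbase.client.Scan;

    Scan scan = new Scan();
    // Limit the number of columns returned per Result so a single very wide
    // row is split across several next() calls instead of being materialized
    // in memory all at once.
    scan.setBatch(1000);
    // Rows-per-RPC caching is independent of the per-row batch size.
    scan.setCaching(100);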