[ https://issues.apache.org/jira/browse/HBASE-13031?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14320582#comment-14320582 ]

Andrew Purtell commented on HBASE-13031:
----------------------------------------

bq. So you're suggesting Export to sequence files, then build another MR job to 
translate those to HFiles, then copy those to the remote cluster for bulk load? 
Definitely better than Importing through regionservers, but still seems to 
require a couple of extra copies when the data is already sitting in HFiles.

Close.

0. Set up replication at time T. 

1. Export manageable keyranges from time 0 to T-1 to compressed sequencefiles. 
Use bzip2 or even xz (slow compressor!!!) to squeeze out as much redundancy as 
possible. This will be better than any compression you'll ever see in an HFile. 
It's also independent of compaction: you won't get junk or redundant HFiles in 
the mix, just exactly the KVs present in the keyspace. 

2. Transfer the maximally compressed sequencefiles over the wide area link.

3. Convert the sequencefiles to HFiles. Cluster resources here should be mostly 
idle until the cluster goes live, so there's plenty of headroom. 

4. Bulk load.

5. Rinse and repeat until all of the keyspace has been transferred.
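One way to sketch steps 1-4 with the stock MapReduce tools that ship with HBase. Table name, keys, paths, and the endtime value are placeholders, and the `mapreduce.output.fileoutputformat.*` property names assume Hadoop 2 era configuration; older releases used the `mapred.output.*` equivalents:

```shell
# 1. Export one keyrange for [0, T-1] to bzip2-compressed sequencefiles.
hbase org.apache.hadoop.hbase.mapreduce.Export \
  -D hbase.mapreduce.scan.row.start=row-aaa \
  -D hbase.mapreduce.scan.row.stop=row-mmm \
  -D mapreduce.output.fileoutputformat.compress=true \
  -D mapreduce.output.fileoutputformat.compress.codec=org.apache.hadoop.io.compress.BZip2Codec \
  -D mapreduce.output.fileoutputformat.compress.type=BLOCK \
  mytable /export/mytable-part1 1 0 <T-1-as-epoch-millis>

# 2. Transfer the compressed sequencefiles over the wide area link.
hadoop distcp hdfs://src/export/mytable-part1 hdfs://dst/staging/mytable-part1

# 3. On the destination cluster, convert the sequencefiles to HFiles.
#    import.bulk.output makes Import write HFiles for bulk load instead
#    of writing through the regionservers.
hbase org.apache.hadoop.hbase.mapreduce.Import \
  -D import.bulk.output=/staging/mytable-part1-hfiles \
  mytable /staging/mytable-part1

# 4. Bulk load the generated HFiles.
hbase org.apache.hadoop.hbase.mapreduce.LoadIncrementalHFiles \
  /staging/mytable-part1-hfiles mytable
```

Repeat the Export with the next start/stop row pair until the whole keyspace is covered.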


> Ability to snapshot based on a key range
> ----------------------------------------
>
>                 Key: HBASE-13031
>                 URL: https://issues.apache.org/jira/browse/HBASE-13031
>             Project: HBase
>          Issue Type: Improvement
>            Reporter: churro morales
>            Assignee: churro morales
>             Fix For: 2.0.0, 0.94.26, 1.1.0, 0.98.11
>
>
> Posted on the mailing list, and it seems some people are interested.  A 
> little background for everyone.
> We have a very large table that we would like to snapshot and transfer to 
> another cluster (compressed data is always better to ship).  Our problem 
> is that it could take many weeks to transfer all of the data, and during 
> that time major compactions could nearly double the data stored in DFS, 
> which would cause us to run out of disk space.
> So we were thinking about allowing the ability to snapshot a specific key 
> range.  
> Ideally, the user would specify a start and stop key, and those would be 
> aligned to region boundaries.  If the boundaries change between the time 
> the user submits the request and the time the snapshot is taken (due to 
> region merges or splits), the snapshot should fail.
> We would know which regions to snapshot, and if those changed between when 
> the request was submitted and when the regions were locked, the snapshot 
> could simply fail and the user would try again, instead of potentially 
> getting more or less data than they anticipated.  I was planning on 
> storing the start / stop key in the SnapshotDescription, and from there it 
> looks pretty straightforward: we just have to change the verifier code to 
> accommodate the key ranges.  
> If this design sounds good, or if I am overlooking anything, please let me 
> know.  Once we agree on the design, I'll write and submit the patches.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
