[ https://issues.apache.org/jira/browse/HBASE-11409?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16281077#comment-16281077 ]

churro morales commented on HBASE-11409:
----------------------------------------

Sure, I'll fix the parameter, and I'll fix the indentation, no problem.

With the current LoadIncrementalHFiles, if you want to use it with a depth of 3, you 
must have an existing table.  If the table doesn't exist, the tool won't create it for 
you; the old patch would, but that no longer works against master.  
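
For context, the depth here counts directory levels under the path handed to the tool; 
the layouts below are only illustrative (region/family names are placeholders):

    Depth 2 (what the tool expects today, e.g. HFileOutputFormat output):
        <dir>/<columnFamily>/<hfile>

    Depth 3 (e.g. a restored snapshot's table directory, which adds a region level):
        <dir>/<region>/<columnFamily>/<hfile>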

I can fix those two issues and put up a new patch. 
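
For reference, this is roughly how the tool has to be driven today, with the target 
table created ahead of time.  It is only a minimal sketch assuming the HBase 1.x 
client API (the class and signatures differ on master), and the table name and path 
are placeholders:

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.Path;
    import org.apache.hadoop.hbase.HBaseConfiguration;
    import org.apache.hadoop.hbase.TableName;
    import org.apache.hadoop.hbase.client.Admin;
    import org.apache.hadoop.hbase.client.Connection;
    import org.apache.hadoop.hbase.client.ConnectionFactory;
    import org.apache.hadoop.hbase.client.RegionLocator;
    import org.apache.hadoop.hbase.client.Table;
    import org.apache.hadoop.hbase.mapreduce.LoadIncrementalHFiles;

    public class BulkLoadSketch {
      public static void main(String[] args) throws Exception {
        Configuration conf = HBaseConfiguration.create();
        // The target table must already exist; LoadIncrementalHFiles will not create it.
        TableName tableName = TableName.valueOf("merged_table");   // placeholder name
        try (Connection conn = ConnectionFactory.createConnection(conf);
             Admin admin = conn.getAdmin();
             Table table = conn.getTable(tableName);
             RegionLocator locator = conn.getRegionLocator(tableName)) {
          LoadIncrementalHFiles loader = new LoadIncrementalHFiles(conf);
          // Expects the depth-2 layout: <dir>/<columnFamily>/<hfile>
          loader.doBulkLoad(new Path("/bulkload/output"), admin, table, locator);  // placeholder path
        }
      }
    }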

> Add more flexibility for input directory structure to LoadIncrementalHFiles
> ---------------------------------------------------------------------------
>
>                 Key: HBASE-11409
>                 URL: https://issues.apache.org/jira/browse/HBASE-11409
>             Project: HBase
>          Issue Type: Bug
>    Affects Versions: 3.0.0
>            Reporter: churro morales
>            Assignee: churro morales
>         Attachments: HBASE-11409.v1.patch
>
>
> Use case:
> We were trying to combine two very large tables into a single table, so we 
> ran jobs in one datacenter that populated certain column families and jobs in 
> another datacenter that populated the other column families.  We took 
> snapshots and exported them to their respective datacenters.  We wanted to 
> simply take the HDFS-restored snapshot and use LoadIncremental to merge the 
> data.  
> It would be nice to add support for running LoadIncremental on a directory 
> where the depth of the store files is something other than two (the current 
> behavior).  
> With snapshots, it would be nice if you could pass a restored HDFS snapshot's 
> directory and have the tool run against it.  
> I am attaching a patch where I parameterize the bulkLoad timeout as well as 
> the default store file depth.  



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)
