Also, Ashish, when specifying the region location, is there any option to
use a regular expression?

On Thu, Mar 16, 2017 at 5:55 PM, Rajeshkumar J <[email protected]>
wrote:

> Thanks, Ashish. I got it: that region doesn't contain any data, and the
> data is available in the other regions.
>
> On Thu, Mar 16, 2017 at 5:48 PM, ashish singhi <[email protected]>
> wrote:
>
>> Was any data added into this table region? If not, then you can skip
>> this region directory when running completebulkload.
>>
>> -----Original Message-----
>> From: Rajeshkumar J [mailto:[email protected]]
>> Sent: 16 March 2017 17:44
>> To: [email protected]
>> Subject: Re: hbase table creation
>>
>> Ashish,
>>
>>     I have tried as you said, but I don't have any data in this folder:
>>
>> /hbase/tmp/t1/region1/d
>>
>> So in the log I get:
>>
>> 2017-03-16 13:12:40,120 WARN  [main] mapreduce.LoadIncrementalHFiles:
>> Bulk load operation did not find any files to load in directory
>> /hbase/tmp/t1/region1.  Does it contain files in subdirectories that
>> correspond to column family names?
>>
>> So is this data corrupted?
>>
>>
>>
>> On Thu, Mar 16, 2017 at 5:14 PM, ashish singhi <[email protected]>
>> wrote:
>>
>> > Hi,
>> >
>> > You can try the completebulkload tool to load the data into the
>> > table. Below is the command usage:
>> >
>> > hbase org.apache.hadoop.hbase.mapreduce.LoadIncrementalHFiles
>> >
>> > usage: completebulkload /path/to/hfileoutputformat-output tablename
>> > -Dcreate.table=no - can be used to avoid creation of table by this tool
>> >   Note: if you set this to 'no', then the target table must already
>> > exist in HBase.
>> >
>> >
>> > For example:
>> > Suppose the table name is t1 and you have copied the data of t1 from
>> > cluster1 to the /hbase/tmp/t1 directory in cluster2.
>> > From each region directory of that table, delete the recovered.edits
>> > directory and any other directory except the column family (store)
>> > directories. Suppose table t1 has two regions and the listing of the
>> > table dir looks like this:
>> >
>> > ls /hbase/tmp/t1
>> >
>> > drwxr-xr-x    /hbase/tmp/t1/.tabledesc
>> > -rw-r--r--    /hbase/tmp/t1/.tabledesc/.tableinfo.0000000001
>> > drwxr-xr-x    /hbase/tmp/t1/.tmp
>> > drwxr-xr-x    /hbase/tmp/t1/region1
>> > -rw-r--r--    /hbase/tmp/t1/region1/.regioninfo
>> > drwxr-xr-x    /hbase/tmp/t1/region1/d
>> > -rwxrwxrwx    /hbase/tmp/t1/region1/d/0fcaf624cf124d7cab50ace0a6f0f9df_SeqId_4_
>> > drwxr-xr-x    /hbase/tmp/t1/region1/recovered.edits
>> > -rw-r--r--    /hbase/tmp/t1/region1/recovered.edits/2.seqid
>> > drwxr-xr-x    /hbase/tmp/t1/region2
>> > -rw-r--r--    /hbase/tmp/t1/region2/.regioninfo
>> > drwxr-xr-x    /hbase/tmp/t1/region2/d
>> > -rwxrwxrwx    /hbase/tmp/t1/region2/d/14925680d8a5457e9be1c05087f44df5_SeqId_4_
>> > drwxr-xr-x    /hbase/tmp/t1/region2/recovered.edits
>> > -rw-r--r--    /hbase/tmp/t1/region2/recovered.edits/2.seqid
>> >
>> > Delete /hbase/tmp/t1/region1/recovered.edits and
>> > /hbase/tmp/t1/region2/recovered.edits.
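The cleanup step can be sketched as a small script. This is a local-filesystem simulation of the layout above, purely for illustration; on the real cluster these paths live in HDFS, so you would use `hdfs dfs -rm -r` instead of `rm -rf`:

```shell
# Recreate the copied-table layout locally (illustration only; the real
# directories are in HDFS).
BASE="$(mktemp -d)/hbase/tmp/t1"
for r in region1 region2; do
    mkdir -p "$BASE/$r/d" "$BASE/$r/recovered.edits"
    touch "$BASE/$r/.regioninfo" "$BASE/$r/recovered.edits/2.seqid"
done

# Keep only the column family dir 'd' and the .regioninfo file in each
# region dir; here that means removing recovered.edits.
for r in region1 region2; do
    rm -rf "$BASE/$r/recovered.edits"    # on HDFS: hdfs dfs -rm -r <path>
done

ls "$BASE/region1"    # shows only 'd' (dot files are hidden)
```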
>> >
>> > Then run completebulkload for each region, as below:
>> >
>> > 1) hbase org.apache.hadoop.hbase.mapreduce.LoadIncrementalHFiles /hbase/tmp/t1/region1 t1
>> > 2) hbase org.apache.hadoop.hbase.mapreduce.LoadIncrementalHFiles /hbase/tmp/t1/region2 t1
>> >
>> > Note: If the table doesn't exist, the tool will create it with only
>> > one region. If you want the same table properties as in cluster1,
>> > you will have to create the table manually in cluster2.
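If you do need to pre-create the table, a minimal sketch is to feed a create statement to the HBase shell. The column family name 'd' comes from the listing above; the statement is only printed here as a dry run, since executing it needs a live cluster, and any extra table properties (splits, compression, etc.) would have to be copied from cluster1's table descriptor:

```shell
# Dry run: build and print the HBase shell statement. On cluster2 you
# would pipe it into the shell instead:  echo "$CREATE_STMT" | hbase shell
CREATE_STMT="create 't1', 'd'"
echo "$CREATE_STMT"
```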
>> >
>> > I hope this helps.
>> >
>> > Regards,
>> > Ashish
>> >
>> > -----Original Message-----
>> > From: Rajeshkumar J [mailto:[email protected]]
>> > Sent: 16 March 2017 16:46
>> > To: [email protected]
>> > Subject: Re: hbase table creation
>> >
>> > Karthi,
>> >
>> >    As I mentioned, I no longer have any data in that old cluster; I
>> > only have the copied files in the new cluster. So I think I can't
>> > use this utility?
>> >
>> > On Thu, Mar 16, 2017 at 4:10 PM, karthi keyan
>> > <[email protected]>
>> > wrote:
>> >
>> > > Ted-
>> > >
>> > > Cool! Will keep that in mind hereafter.
>> > >
>> > > On Thu, Mar 16, 2017 at 4:06 PM, Ted Yu <[email protected]> wrote:
>> > >
>> > > > karthi:
>> > > > The link you posted was for 0.94.
>> > > >
>> > > > We'd better use the up-to-date link from the ref guide (see my
>> > > > previous reply).
>> > > >
>> > > > Cheers
>> > > >
>> > > > On Thu, Mar 16, 2017 at 3:26 AM, karthi keyan
>> > > > <[email protected]
>> > > >
>> > > > wrote:
>> > > >
>> > > > > Rajesh,
>> > > > >
>> > > > > Use HBase snapshots for backup: take a snapshot of the data
>> > > > > under "/hbase/default/data/testing", export it, and clone it
>> > > > > on your destination cluster.
>> > > > >
>> > > > > Snapshot ref link - http://hbase.apache.org/0.94/book/ops.snapshots.html
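As a sketch, the snapshot route has three steps. The commands below are only echoed (running them needs live clusters); the table and snapshot names follow this thread's example, and hdfs://cluster2/hbase is a placeholder for cluster2's HBase root directory:

```shell
TABLE=testing
SNAP="${TABLE}_snap"

# 1) On cluster1, run this statement inside 'hbase shell':
echo "snapshot '$TABLE', '$SNAP'"

# 2) From cluster1, export the snapshot to cluster2 (placeholder URI):
echo "hbase org.apache.hadoop.hbase.snapshot.ExportSnapshot -snapshot $SNAP -copy-to hdfs://cluster2/hbase"

# 3) On cluster2, run this inside 'hbase shell':
echo "clone_snapshot '$SNAP', '$TABLE'"
```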
>> > > > >
>> > > > >
>> > > > >
>> > > > > On Thu, Mar 16, 2017 at 3:51 PM, sudhakara st
>> > > > > <[email protected]>
>> > > > > wrote:
>> > > > >
>> > > > > > You have to use 'copytable'; here is more info:
>> > > > > > https://hbase.apache.org/book.html#copy.table
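A hedged sketch of a CopyTable invocation (dry run: the command is only echoed, and the --peer.adr value is a placeholder ZooKeeper quorum for the destination cluster). Note that CopyTable reads the live source table, so it only applies while cluster1 is still serving the table:

```shell
# Build and print the CopyTable command; on a real source cluster you
# would execute it instead of echoing it.
CMD="hbase org.apache.hadoop.hbase.mapreduce.CopyTable --peer.adr=zk1,zk2,zk3:2181:/hbase testing"
echo "$CMD"
```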
>> > > > > >
>> > > > > > On Thu, Mar 16, 2017 at 3:46 PM, Rajeshkumar J <
>> > > > > > [email protected]>
>> > > > > > wrote:
>> > > > > >
>> > > > > > > I have copied the HBase data of a table from one cluster to
>> > > > > > > another. For instance, I have a table named testing, and its
>> > > > > > > data is in the path /hbase/default/data/testing.
>> > > > > > >
>> > > > > > > I have copied these files from the existing cluster to the
>> > > > > > > new cluster. Is there any possibility to create the table
>> > > > > > > and load the data from these files in the new cluster?
>> > > > > >
>> > > > > >
>> > > > > >
>> > > > > > --
>> > > > > >
>> > > > > > Regards,
>> > > > > > ...sudhakara
>> > > > > >
>> > > > >
>> > > >
>> > >
>> >
>>
>
>
