Thanks Harsh,

I will make a note of it. It would be good if you could also look at my copyTable
query, as I am not able to copy my table from the master to the slave cluster.
Any ideas?
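
For reference, a CopyTable invocation generally looks like the sketch below. The
--peer.adr value is a placeholder I am assuming (the slave cluster's ZooKeeper
quorum, client port, and znode parent), not an address from this thread:

./hadoop org.apache.hadoop.hbase.mapreduce.CopyTable \
    --peer.adr=slave-zk-host:2181:/hbase list

Run against the master cluster, this should copy the 'list' table to whatever
cluster --peer.adr points at.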

Thanks

-----Original Message-----
From: Harsh J [mailto:[email protected]] 
Sent: Wednesday, August 31, 2011 1:55 PM
To: [email protected]
Subject: Re: Facing issues in Import tool

Stuti,

Generally, HBase never expects you to give a raw filesystem path for any table
writing/reading operation. The table name alone is sufficient, because HBase
maintains the table metadata itself (in .META.).
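
Concretely, with the commands from your mail below:

# Fails: '/hbase/list' is a filesystem path, not a table name
./hadoop org.apache.hadoop.hbase.mapreduce.Import /hbase/list /backup

# Works: just the table name; HBase resolves it via .META.
./hadoop org.apache.hadoop.hbase.mapreduce.Import list /backup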

On Wed, Aug 31, 2011 at 12:14 PM, Stuti Awasthi <[email protected]> wrote:
> Hi Friends
>
> I resolved this. The command should be:
> ./hadoop org.apache.hadoop.hbase.mapreduce.Import list /backup
>
> It worked and imported my data from /backup into the 'list' table. :)
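>
> To sanity-check an import like this, a quick row count from the HBase shell
> (the output depends on your data, so none is shown here):
>
> hbase(main):001:0> count 'list'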
>
>
> From: Stuti Awasthi
> Sent: Wednesday, August 31, 2011 12:06 PM
> To: [email protected]
> Subject: Facing issues in Import tool
>
> Hi,
> I was trying the export/import utility but am facing some issues while
> importing. I have 2 HBase-on-Hadoop clusters, say A and B.
> Here is what I did:
>
> Cluster A:
>
> *         Created table 'list' in HBase, which is stored under /hbase in
> Hadoop
>
> *         Exported table 'list' to /backup in Hadoop
>
> *         Distcp'd /backup to cluster B at location /backup in Hadoop (the
> last two steps are sketched as commands below).
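>
> A sketch of the export and distcp steps as commands (the HDFS URIs are
> placeholders I am assuming, not the real namenode addresses):
>
> ./hadoop org.apache.hadoop.hbase.mapreduce.Export list /backup
> ./hadoop distcp hdfs://namenodeA:9000/backup hdfs://namenodeB:9000/backup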
>
> Cluster B: Now I have the exported files of my table 'list' on the other
> cluster B, which I try to import.
>
> *         Created a table with the same schema and the same name 'list'
> in cluster B.
>
> *         Tried to import data from /backup into the 'list' table.
>
> Command is: "./hadoop org.apache.hadoop.hbase.mapreduce.Import /hbase/list
> /backup"
>
> Here: /hbase/list is the empty table named "list" in HBase
>       /backup contains the exported files from cluster A
>
> The error I am getting is:
> 11/08/31 11:33:39 WARN client.HConnectionManager$HConnectionImplementation: Encountered problems when prefetch META table:
> org.apache.hadoop.hbase.TableNotFoundException: Cannot find row in .META. for table: /hbase/list, row=/hbase/list,,99999999999999
>        at org.apache.hadoop.hbase.client.MetaScanner.metaScan(MetaScanner.java:136)
>        at org.apache.hadoop.hbase.client.MetaScanner.metaScan(MetaScanner.java:95)
>        at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.prefetchRegionCache(HConnectionManager.java:648)
>        at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegionInMeta(HConnectionManager.java:702)
>        at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegion(HConnectionManager.java:593)
>        at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegion(HConnectionManager.java:558)
>        at org.apache.hadoop.hbase.client.HTable.<init>(HTable.java:172)
>        at org.apache.hadoop.hbase.client.HTable.<init>(HTable.java:146)
>        at org.apache.hadoop.hbase.mapreduce.TableOutputFormat.setConf(TableOutputFormat.java:198)
>        at org.apache.hadoop.util.ReflectionUtils.setConf(ReflectionUtils.java:62)
>        at org.apache.hadoop.util.ReflectionUtils.newInstance(ReflectionUtils.java:117)
>        at org.apache.hadoop.mapred.JobClient.submitJobInternal(JobClient.java:768)
>        at org.apache.hadoop.mapreduce.Job.submit(Job.java:432)
>        at org.apache.hadoop.mapreduce.Job.waitForCompletion(Job.java:448)
>        at org.apache.hadoop.hbase.mapreduce.Import.main(Import.java:124)
> 11/08/31 11:33:39 ERROR mapreduce.TableOutputFormat: org.apache.hadoop.hbase.TableNotFoundException: /hbase/list
> 11/08/31 11:33:39 INFO input.FileInputFormat: Total input paths to process : 1
> 11/08/31 11:33:40 INFO mapred.JobClient: Running job: job_201108302028_0005
> 11/08/31 11:33:41 INFO mapred.JobClient:  map 0% reduce 0%
> 11/08/31 11:33:52 INFO mapred.JobClient: Task Id : attempt_201108302028_0005_m_000000_0, Status : FAILED
> java.lang.NullPointerException
>
> Scan '.META.' result:
>
> hbase(main):002:0> scan '.META.'
> ROW: list,,1314770593439.e054afd492290f53cc0a8060b5a697bb.
>   column=info:regioninfo, timestamp=1314770593490, value=REGION => {NAME =>
>   'list,,1314770593439.e054afd492290f53cc0a8060b5a697bb.', STARTKEY => '',
>   ENDKEY => '', ENCODED => e054afd492290f53cc0a8060b5a697bb, TABLE => {{NAME =>
>   'list', FAMILIES => [{NAME => 'info', BLOOMFILTER => 'NONE',
>   REPLICATION_SCOPE => '0', COMPRESSION => 'NONE', VERSIONS => '3', TTL =>
>   '2147483647', BLOCKSIZE => '65536', IN_MEMORY => 'false', BLOCKCACHE => 'true'}]}}
>   column=info:server, timestamp=1314770593537, value=127.0.0.1:52030
>   column=info:serverstartcode, timestamp=1314770593537, value=1314716440847
>
>
> How do I import it correctly? Any ideas?
>
> Stuti
>



--
Harsh J
