Hi Lars,

Thanks for pointing me in the right direction.
I have already restarted ZK, but I did not remove anything on its server. Here is the output of the ZK ls; it still lists a few tables I have already dropped (test3, work, etc.):

[zk: cube(CONNECTED) 8] ls /hbase/table
[work, work_sent, .META., -ROOT-, work_proposed, test3]

I removed the table's znode with rmr /hbase/table/work. Is that safe, or am I better off restarting everything? (Two rough Java sketches are appended after the quoted thread for reference: one doing the same znode cleanup through the ZooKeeper client, and one hardening the create/delete snippet quoted below.)

JM

2012/7/5, Lars George <[email protected]>:
> Hi JM,
>
> So you already wiped everything on the HDFS level? The only thing left is
> ZooKeeper. It should not hold you back, but it could still have an entry in
> /hbase/table. Could you try the ZK shell and do an ls on that znode?
>
> If you wipe HDFS anyway, please also try wiping the ZK data and trying
> again.
>
> Lars
>
> On Jul 5, 2012, at 1:06 PM, Jean-Marc Spaggiari wrote:
>
>> Hi,
>>
>> Yesterday I stopped my cluster because of a storm. It did not come back
>> up well, so I formatted the Hadoop FS and restarted it.
>>
>> Now, when I try to re-create my schema, I'm running into issues:
>> HBase tells me that the table doesn't exist when I want to delete it,
>> but that it does exist when I try to create it.
>>
>> Here is the simple code:
>>
>> HBaseAdmin admin = new HBaseAdmin(config);
>> if (admin.tableExists(Constants.TABLE_WORK))
>> {
>>   admin.disableTable(Constants.TABLE_WORK);
>>   admin.deleteTable(Constants.TABLE_WORK);
>> }
>> admin.createTable(table_work);
>> admin.close();
>>
>> tableExists returns true, but createTable throws the exception below.
>>
>> I did an rm -rf on the Hadoop storage directory and formatted the
>> namenode, so what did I miss? If I try to put some data into this
>> table I get org.apache.hadoop.hbase.TableNotFoundException: Cannot
>> find row in .META. for table: work, row=work,,99999999999999, but if I
>> look at the web interface, there is no table anywhere except META and
>> ROOT.
>>
>> Do you have any idea where I should look?
>>
>> Thanks,
>>
>> JM
>>
>> org.apache.hadoop.hbase.TableExistsException:
>> org.apache.hadoop.hbase.TableExistsException: work
>>   at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
>>   at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:57)
>>   at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
>>   at java.lang.reflect.Constructor.newInstance(Constructor.java:532)
>>   at org.apache.hadoop.ipc.RemoteException.instantiateException(RemoteException.java:95)
>>   at org.apache.hadoop.ipc.RemoteException.unwrapRemoteException(RemoteException.java:79)
>>   at org.apache.hadoop.hbase.client.HBaseAdmin.createTableAsync(HBaseAdmin.java:492)
>>   at org.apache.hadoop.hbase.client.HBaseAdmin.createTable(HBaseAdmin.java:394)
>>   at org.apache.hadoop.hbase.client.HBaseAdmin.createTable(HBaseAdmin.java:330)
>>   at org.spaggiari.distparser.servlet.GetWorkload.main(GetWorkload.java:1071)
>> Caused by: org.apache.hadoop.ipc.RemoteException:
>> org.apache.hadoop.hbase.TableExistsException: work
>>   at org.apache.hadoop.hbase.master.handler.CreateTableHandler.<init>(CreateTableHandler.java:103)
>>   at org.apache.hadoop.hbase.master.HMaster.createTable(HMaster.java:1102)
>>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>>   at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
>>   at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>>   at java.lang.reflect.Method.invoke(Method.java:601)
>>   at org.apache.hadoop.hbase.ipc.WritableRpcEngine$Server.call(WritableRpcEngine.java:364)
>>   at org.apache.hadoop.hbase.ipc.HBaseServer$Handler.run(HBaseServer.java:1376)
>>
>>   at org.apache.hadoop.hbase.ipc.HBaseClient.call(HBaseClient.java:918)
>>   at org.apache.hadoop.hbase.ipc.WritableRpcEngine$Invoker.invoke(WritableRpcEngine.java:150)
>>   at $Proxy1.createTable(Unknown Source)
>>   at org.apache.hadoop.hbase.client.HBaseAdmin.createTableAsync(HBaseAdmin.java:490)
>>   ... 3 more
>
>
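
For reference, here is a rough sketch of the same inspection and cleanup done through the plain ZooKeeper Java client instead of the zk shell. The quorum address ("cube:2181") and the session timeout are placeholders, and it assumes /hbase/table/work is a leaf znode, so a single non-recursive delete() is enough (the shell's rmr is recursive). Whether it is safe to do this while the master is running is exactly the open question above.

// Rough sketch only: quorum address and timeout are placeholders, and it
// assumes /hbase/table/work has no children, so a plain delete() works.
import java.util.List;
import org.apache.zookeeper.WatchedEvent;
import org.apache.zookeeper.Watcher;
import org.apache.zookeeper.ZooKeeper;

public class DropStaleTableZnode {
  public static void main(String[] args) throws Exception {
    ZooKeeper zk = new ZooKeeper("cube:2181", 30000, new Watcher() {
      public void process(WatchedEvent event) { /* no-op watcher */ }
    });
    try {
      // Same as "ls /hbase/table" in the zk shell.
      List<String> tables = zk.getChildren("/hbase/table", false);
      System.out.println("Tables known to ZK: " + tables);

      // Same idea as "rmr /hbase/table/work"; version -1 means "any version".
      if (tables.contains("work")) {
        zk.delete("/hbase/table/work", -1);
      }
    } finally {
      zk.close();
    }
  }
}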

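And for completeness, a hedged variant of the create/delete snippet from the quoted message, assuming the 0.92/0.94-era HBaseAdmin API and a made-up table name and column family. It only guards the disable call (disableTable fails if the table is already disabled) and surfaces the TableExistsException with a pointer back to the /hbase/table znode; it does not change how the master decides whether a table exists.

// Hedged sketch only: table name "work" and column family "cf" are
// placeholders, and the API is the 0.92/0.94-era HBaseAdmin client.
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.HColumnDescriptor;
import org.apache.hadoop.hbase.HTableDescriptor;
import org.apache.hadoop.hbase.TableExistsException;
import org.apache.hadoop.hbase.client.HBaseAdmin;

public class RecreateTable {
  public static void main(String[] args) throws Exception {
    Configuration config = HBaseConfiguration.create();
    HBaseAdmin admin = new HBaseAdmin(config);
    String tableName = "work";                    // placeholder name
    HTableDescriptor desc = new HTableDescriptor(tableName);
    desc.addFamily(new HColumnDescriptor("cf"));  // placeholder family

    try {
      if (admin.tableExists(tableName)) {
        // disableTable() fails on an already-disabled table, so check first.
        if (admin.isTableEnabled(tableName)) {
          admin.disableTable(tableName);
        }
        admin.deleteTable(tableName);
      }
      admin.createTable(desc);
    } catch (TableExistsException e) {
      // The master still believes the table exists even though HDFS was
      // wiped; the stale state then lives in .META. or under /hbase/table.
      System.err.println("Master still thinks '" + tableName + "' exists: " + e);
    } finally {
      admin.close();
    }
  }
}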