Hi Henry,

I would suggest trying out the 'hbck' utility of HBase for dealing with corrupt metadata. It's pretty effective and clean, and if you use it you won't have to deal with manual cleanup most of the time. Run 'hbase hbck -help' for more details.
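As a rough sketch, a typical hbck session looks like the following (the exact repair flags vary by HBase version; 'blog' is the table from Henry's report, and you should always run the read-only check before attempting any fix):

```shell
#!/bin/sh
# Read-only consistency check of .META. against HDFS and the region servers.
hbase hbck

# Same check with per-region detail, useful for spotting the zombie table.
hbase hbck -details

# Attempt automatic repair of inconsistencies found above.
# (In 0.92.x this is the generic -fix flag; newer releases split it into
# finer-grained options such as -fixMeta and -fixAssignments.)
hbase hbck -fix
```

Note that `-fix` changes cluster state, so check the `-details` output first and make sure no other repair job is running at the same time.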
Here is a useful link from the HBase wiki:
http://hbase.apache.org/book/hbck.in.depth.html

HTH,
Anil Gupta

On Thu, Aug 9, 2012 at 5:00 AM, Jean-Marc Spaggiari <[email protected]> wrote:
> Hi Henry,
>
> I faced the same issue not long ago...
>
> Can you take a look at what you have in ZooKeeper under /hbase/table?
>
> If your table is there, that's why you see it in the list. Simply
> remove it from ZooKeeper.
>
> You can also take a look here:
> https://issues.apache.org/jira/browse/HBASE-6294
>
> JM
>
> 2012/8/9, henry.kim <[email protected]>:
> > Hi, HBase users.
> >
> > I ran into a problem while testing coprocessors, which were released
> > in HBase 0.92.1.
> >
> > Here is the hbase shell output:
> >
> > ----------------
> > hbase(main):001:0> truncate 'blog'
> > Truncating 'blog' table (it may take a while):
> >
> > ERROR: Unknown table blog!
> >
> > Here is some help for this command:
> > Disables, drops and recreates the specified table.
> >
> > hbase(main):002:0> list
> > TABLE
> > blog
> > counter
> > sidx
> > 3 row(s) in 15.3610 seconds
> > ----------------
> >
> > Yes, I think table 'blog' is a zombie.
> >
> > There was heavy load from coprocessors using the
> > RegionServerObserver interfaces.
> >
> > How can I fix this situation?

--
Thanks & Regards,
Anil Gupta
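For reference, the ZooKeeper cleanup JM describes in the quoted reply can be sketched as follows. This assumes the default ZooKeeper parent znode of /hbase and uses HBase's bundled ZooKeeper shell; double-check the path on your cluster before deleting anything:

```shell
#!/bin/sh
# Open the ZooKeeper CLI bundled with HBase, connected to the cluster's quorum.
hbase zkcli <<'EOF'
# List the table znodes HBase currently knows about; a stale 'blog'
# entry here explains why the table still shows up in `list`.
ls /hbase/table

# Remove the stale znode for the zombie table.
delete /hbase/table/blog

quit
EOF
```

After removing the znode, restarting the HMaster (or at least re-running `hbase hbck`) is a reasonable way to confirm the stale table entry is gone.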
