That fixes the issue.

HBaseFsck found missing tables after scanning hdfs (possibly from previous
release of HBase - I installed 0.20.5 recently):
ERROR: Path hdfs://sjc9-flash-grid04.ciq.com:9000/hbase/TRIAL-ERRORS-1277252980233-0 does not have a corresponding entry in META.

Is there a way to add those tables back?

Thanks
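For reference, the fix discussed in the thread below boils down to copying hbase.rootdir into both the old- and new-style default-filesystem keys before opening the FileSystem. A minimal sketch (using java.util.Properties as a stand-in for Hadoop's Configuration, with a made-up namenode host):

```java
import java.util.Properties;

public class DefaultFsFix {
    // Sketch: point the default filesystem at the HBase root directory so a
    // tool like HBaseFsck does not fall back to the local filesystem (file:///).
    static void pointDefaultFsAtHBaseRoot(Properties conf) {
        String rootDir = conf.getProperty("hbase.rootdir");
        conf.setProperty("fs.defaultFS", rootDir);    // new-style key
        conf.setProperty("fs.default.name", rootDir); // old-style key, read by 0.20-era Hadoop
    }

    public static void main(String[] args) {
        Properties conf = new Properties();
        conf.setProperty("hbase.rootdir", "hdfs://namenode:9000/hbase"); // hypothetical host
        pointDefaultFsAtHBaseRoot(conf);
        System.out.println(conf.getProperty("fs.default.name"));
    }
}
```

With a real Hadoop Configuration the calls would be conf.get(...)/conf.set(...) instead of the Properties methods.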

On Tue, Jul 6, 2010 at 9:08 AM, Stack <[email protected]> wrote:

> HBaseFsck does this:
>
>    conf.set("fs.defaultFS", conf.get("hbase.rootdir"));
>
> Add this line:
>
>    conf.set("fs.default.name", conf.get("hbase.rootdir"));
>
> See if that fixes it (the former is the new way of specifying defaultFS,
> while the latter is the old style).
>
> St.Ack
>
> On Mon, Jul 5, 2010 at 6:25 PM, Ted Yu <[email protected]> wrote:
> > I assume the conf directory is that of HBase.
> >
> > I used this command previously:
> > bin/hbase hbck
> >
> > I tried this today:
> > bin/hbase org.apache.hadoop.hbase.client.HBaseFsck
> >
> > Result is the same.
> >
> > I do see conf in the classpath:
> > 10/07/05 18:12:32 INFO zookeeper.ZooKeeper: Client
> > environment:java.class.path=/home/hadoop/mmp/234_x/hbase/conf:...
> > ...
> > rootDir: hdfs://sjc9-flash-grid04.carrieriq.com:9000/hbase hdfs://sjc9-flash-grid04.carrieriq.com:9000/hbase
> > Version: 0.20.5
> > 10/07/05 18:12:32 DEBUG zookeeper.ZooKeeperWrapper: Read ZNode
> > /hbase/root-region-server got 10.32.56.159:60020
> > 10/07/05 18:12:32 DEBUG client.HConnectionManager$TableServers: Found ROOT at 10.32.56.159:60020
> > 10/07/05 18:12:32 DEBUG client.HConnectionManager$TableServers: Cached
> > location for .META.,,1 is 10.32.56.159:60020
> >
> > Number of Tables: 0
> > Number of live region servers:2
> > Number of dead region servers:0
> > Exception in thread "main" java.lang.IllegalArgumentException: Wrong FS:
> > hdfs://sjc9-flash-grid04.carrieriq.com:9000/hbase, expected: file:///
> >        at org.apache.hadoop.fs.FileSystem.checkPath(FileSystem.java:310)
> >
> >
> > On Mon, Jul 5, 2010 at 10:24 AM, Stack <[email protected]> wrote:
> >
> >> Make sure the conf directory is in your classpath.  If it is, it might be
> >> the case that you need something like the below:
> >>
> >> # Set hadoop filesystem configuration using the hbase.rootdir.
> >> # Otherwise, we'll always use localhost even though the hbase.rootdir
> >> # might be pointing at hdfs location.
> >> c.set("fs.default.name", c.get(HConstants::HBASE_DIR))
> >> fs = FileSystem.get(c)
> >>
> >> The above is copied from the jruby scripts in the bin dir......
> >>
> >> ...though, looking at HBaseFsck, it already does this.
> >>
> >> So it must be a case of your not setting up the classpath properly?
> >>
> >> You've set the target hdfs in your hbase-site.xml and then you've
> >> launched the script as per:
> >>
> >> ./bin/hbase org.apache.hadoop.hbase.client.HBaseFsck
> >>
> >> (The above will ensure your classpath is set properly).
> >>
> >> St.Ack
> >>
> >>
> >>
> >> On Sat, Jul 3, 2010 at 9:51 AM, Ted Yu <[email protected]> wrote:
> >> > I produced patched version of HBaseFsck.java which is attached.
> >> >
> >> > When I ran it, I got:
> >> >
> >> > Version: 0.20.5
> >> > 10/07/03 09:41:38 DEBUG zookeeper.ZooKeeperWrapper: Read ZNode
> >> > /hbase/root-region-server got 10.32.56.159:60020
> >> > 10/07/03 09:41:38 DEBUG client.HConnectionManager$TableServers: Found ROOT at 10.32.56.159:60020
> >> > 10/07/03 09:41:38 DEBUG client.HConnectionManager$TableServers: Cached
> >> > location for .META.,,1 is 10.32.56.160:60020
> >> >
> >> > Number of Tables: 0
> >> > Number of live region servers:2
> >> > Number of dead region servers:0
> >> > Exception in thread "main" java.lang.IllegalArgumentException: Wrong FS:
> >> > hdfs://sjc9-flash-grid04.carrieriq.com:9000/hbase, expected: file:///
> >> >         at org.apache.hadoop.fs.FileSystem.checkPath(FileSystem.java:310)
> >> >         at org.apache.hadoop.fs.RawLocalFileSystem.pathToFile(RawLocalFileSystem.java:47)
> >> >         at org.apache.hadoop.fs.RawLocalFileSystem.listStatus(RawLocalFileSystem.java:273)
> >> >         at org.apache.hadoop.fs.FileSystem.listStatus(FileSystem.java:721)
> >> >         at org.apache.hadoop.fs.FileSystem.listStatus(FileSystem.java:746)
> >> >         at org.apache.hadoop.fs.ChecksumFileSystem.listStatus(ChecksumFileSystem.java:465)
> >> >         at org.apache.hadoop.hbase.client.HBaseFsck.checkHdfs(HBaseFsck.java:192)
> >> >         at org.apache.hadoop.hbase.client.HBaseFsck.doWork(HBaseFsck.java:165)
> >> >         at org.apache.hadoop.hbase.client.HBaseFsck.main(HBaseFsck.java:533)
> >> > 10/07/03 09:41:38 INFO zookeeper.ZooKeeper: Closing session:
> >> > 0x1299926deb30004
> >> >
> >> > Please comment.
> >> >
> >> > On Sat, Jul 3, 2010 at 7:23 AM, Ted Yu <[email protected]> wrote:
> >> >>
> >> >> Hi,
> >> >> I tried to compile HBaseFsck.java for 0.20.5 but got:
> >> >>
> >> >> compile-core:
> >> >>     [javac] Compiling 338 source files to /Users/tyu/hbase-0.20.5/build/classes
> >> >>     [javac] /Users/tyu/hbase-0.20.5/src/java/org/apache/hadoop/hbase/client/HBaseFsck.java:95: cannot find symbol
> >> >>     [javac] symbol  : constructor HBaseAdmin(org.apache.hadoop.conf.Configuration)
> >> >>     [javac] location: class org.apache.hadoop.hbase.client.HBaseAdmin
> >> >>     [javac]     super(conf);
> >> >>     [javac]     ^
> >> >>     [javac] /Users/tyu/hbase-0.20.5/src/java/org/apache/hadoop/hbase/client/HBaseFsck.java:447: cannot find symbol
> >> >>     [javac] symbol  : method metaScan(org.apache.hadoop.conf.Configuration,org.apache.hadoop.hbase.client.MetaScanner.MetaScannerVisitor)
> >> >>     [javac] location: class org.apache.hadoop.hbase.client.MetaScanner
> >> >>     [javac]       MetaScanner.metaScan(conf, visitor);
> >> >>     [javac]                  ^
> >> >>     [javac] /Users/tyu/hbase-0.20.5/src/java/org/apache/hadoop/hbase/client/HBaseFsck.java:503: cannot find symbol
> >> >>     [javac] symbol  : method create()
> >> >>     [javac] location: class org.apache.hadoop.hbase.HBaseConfiguration
> >> >>     [javac]     Configuration conf = HBaseConfiguration.create();
> >> >>     [javac]                                            ^
> >> >>     [javac] Note: Some input files use or override a deprecated API.
> >> >>     [javac] Note: Recompile with -Xlint:deprecation for details.
> >> >>     [javac] Note: Some input files use unchecked or unsafe operations.
> >> >>     [javac] Note: Recompile with -Xlint:unchecked for details.
> >> >>     [javac] 3 errors
> >> >>
> >> >> Advice is welcome.
> >> >
> >> >
> >>
> >
>
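A note on the compile errors quoted above: all three missing symbols are client APIs that arrived after the 0.20.x line, so a backported HBaseFsck would need the older forms. A hedged, non-compiled sketch of the 0.20-era equivalents (class names from the 0.20 client API; verify against your source tree):

```java
// Hypothetical 0.20-style replacements for the three failing lines:

// 1) HBaseAdmin has no Configuration constructor in 0.20; it takes an
//    HBaseConfiguration, so `conf` passed to super(conf) must be one.

// 2) MetaScanner.metaScan in 0.20 likewise expects an HBaseConfiguration
//    rather than a plain org.apache.hadoop.conf.Configuration:
//        MetaScanner.metaScan(conf, visitor);

// 3) HBaseConfiguration.create() is post-0.20; under 0.20.x use the
//    (later-deprecated) constructor instead:
HBaseConfiguration conf = new HBaseConfiguration();
```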
