Jonathan made a good point.
I tried 'bin/hbase classpath' on the cluster and didn't see hdfs-site.xml
in the output.
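For what it's worth, here is roughly how I checked. The classpath value below is a made-up example (on a real cluster you would substitute CP="$(bin/hbase classpath)"):

```shell
# Sketch of the check, with an illustrative classpath value.
# On a real cluster: CP="$(bin/hbase classpath)"
CP="/etc/hbase/conf:/usr/lib/hbase/hbase-0.90.5.jar:/usr/lib/hadoop/conf"
# Split the classpath on ':' and look for hdfs-site.xml.
echo "$CP" | tr ':' '\n' | grep 'hdfs-site.xml' || echo "hdfs-site.xml not on classpath"
```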

I think we should relax the requirement that hdfs-site.xml is in the
classpath of hbck.
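To spell out why hbck bails with "Wrong FS" (the error quoted further down): the scheme of hbase.rootdir (hdfs://) doesn't match the default filesystem (file:///) that hbck falls back to when fs.default.name isn't set. A minimal sketch of that scheme comparison, with illustrative values (this is not the actual hbck code, and the hostname is hypothetical):

```shell
# Illustrative values, not the real hbck logic.
ROOTDIR="hdfs://namenode:9000/hbase"   # hbase.rootdir (hypothetical host)
DEFAULT_FS="file:///"                  # fallback when fs.default.name is unset
# Compare the URI schemes, which is roughly what FileSystem.checkPath objects to.
if [ "${ROOTDIR%%://*}" = "${DEFAULT_FS%%://*}" ]; then
  echo "schemes match"
else
  echo "Wrong FS: $ROOTDIR, expected: $DEFAULT_FS"
fi
```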

Cheers

On Thu, Jan 5, 2012 at 4:53 PM, Jonathan Hsieh <[email protected]> wrote:

> Actually, fs.default.name should be pulled from your hdfs-site.xml file --
> if you do a hbase classpath, can you tell which hdfs-site.xml file is being
> pulled in?
>
> Jon.
>
> On Thu, Jan 5, 2012 at 4:26 PM, Jonathan Hsieh <[email protected]> wrote:
>
> > Hey Vlad,
> >
> > I wrote the tool -- and I've used it to repair a fairly messed up META
> > table.  I must have used it on a local filesystem copy of META (just got
> > all the .regioninfo files in their directory paths), and then shipped the
> > repaired version of the .META. dir to the customer.
> >
> > This is definitely a bug.  File the JIRA and I'll try to fix it in the
> > next few days.
> >
> > Jon.
> >
> >
> > On Thu, Jan 5, 2012 at 4:16 PM, Vladimir Rodionov <[email protected]> wrote:
> >
> >> Ted,
> >>
> >> "fs.default.name" is a standard config property name which is described
> >> here:
> >> http://hadoop.apache.org/common/docs/current/core-default.html
> >>
> >> It is not CDH-specific. If you are right, then this tool has never been
> >> tested.
> >>
> >> Best regards,
> >> Vladimir Rodionov
> >> Principal Platform Engineer
> >> Carrier IQ, www.carrieriq.com
> >> e-mail: [email protected]
> >>
> >> ________________________________________
> >> From: Ted Yu [[email protected]]
> >> Sent: Thursday, January 05, 2012 4:06 PM
> >> To: [email protected]
> >> Subject: Re: OfflineMetaRepair?
> >>
> >> Vlad:
> >> In the future, please drop unrelated discussion from the bottom of your
> >> email.
> >>
> >> I think what you saw was caused by FS default name not being set
> >> correctly.
> >> In hbck:
> >>        conf.set("fs.defaultFS", conf.get(HConstants.HBASE_DIR));
> >> But cdh3 uses:
> >>    conf.set("fs.default.name", "hdfs://localhost:0");
> >> ./src/test/org/apache/hadoop/hdfs/server/namenode/TestCheckpoint.java
> >>
> >> You can try adding the following line after line 77 of
> >> OfflineMetaRepair.java:
> >>    conf.set("fs.default.name", path);
> >> and rebuilding hbase 0.90.6 (tip of 0.92 branch)
> >>
> >> If the above works, please file a JIRA.
> >>
> >> Thanks
> >>
> >> On Thu, Jan 5, 2012 at 3:30 PM, Vladimir Rodionov <[email protected]> wrote:
> >>
> >> > 0.90.5
> >> >
> >> > I am trying to repair .META. table using this tool
> >> >
> >> > 1.  HBase cluster was shutdown
> >> >
> >> > Then I ran:
> >> >
> >> > 2. [name01 bin]$ hbase org.apache.hadoop.hbase.util.hbck.OfflineMetaRepair \
> >> >    -base hdfs://us01-ciqps1-name01.carrieriq.com:9000/hbase -details
> >> >
> >> >
> >> > This is what I got:
> >> >
> >> > 12/01/05 23:23:15 INFO util.HBaseFsck: Loading HBase regioninfo from
> >> > HDFS...
> >> > 12/01/05 23:23:30 ERROR util.HBaseFsck: Bailed out due to:
> >> > java.lang.IllegalArgumentException: Wrong FS: hdfs://us01-ciqps1-name01.carrieriq.com:9000/hbase/M2M-INTEGRATION-MM_TION-1325190318714/0003d2ede27668737e192d8430dbe5d0/.regioninfo, expected: file:///
> >> >        at org.apache.hadoop.fs.FileSystem.checkPath(FileSystem.java:352)
> >> >        at org.apache.hadoop.fs.RawLocalFileSystem.pathToFile(RawLocalFileSystem.java:47)
> >> >        at org.apache.hadoop.fs.RawLocalFileSystem.getFileStatus(RawLocalFileSystem.java:368)
> >> >        at org.apache.hadoop.fs.FilterFileSystem.getFileStatus(FilterFileSystem.java:251)
> >> >        at org.apache.hadoop.fs.ChecksumFileSystem$ChecksumFSInputChecker.<init>(ChecksumFileSystem.java:126)
> >> >        at org.apache.hadoop.fs.ChecksumFileSystem.open(ChecksumFileSystem.java:284)
> >> >        at org.apache.hadoop.fs.FileSystem.open(FileSystem.java:398)
> >> >        at org.apache.hadoop.hbase.util.HBaseFsck.loadMetaEntry(HBaseFsck.java:256)
> >> >        at org.apache.hadoop.hbase.util.HBaseFsck.loadTableInfo(HBaseFsck.java:284)
> >> >        at org.apache.hadoop.hbase.util.HBaseFsck.rebuildMeta(HBaseFsck.java:402)
> >> >        at org.apache.hadoop.hbase.util.hbck.OfflineMetaRepair.main(OfflineMetaRepair.java:90)
> >> >
> >> >
> >> > Q: What am I doing wrong?
> >> >
> >> > Best regards,
> >> > Vladimir Rodionov
> >> > Principal Platform Engineer
> >> > Carrier IQ, www.carrieriq.com
> >> > e-mail: [email protected]
> >> >
> >> >
> >>
> >> Confidentiality Notice:  The information contained in this message,
> >> including any attachments hereto, may be confidential and is intended to
> >> be read only by the individual or entity to whom this message is
> >> addressed. If the reader of this message is not the intended recipient
> >> or an agent or designee of the intended recipient, please note that any
> >> review, use, disclosure or distribution of this message or its
> >> attachments, in any form, is strictly prohibited.  If you have received
> >> this message in error, please immediately notify the sender and/or
> >> [email protected] and delete or destroy any copy of this
> >> message and its attachments.
> >>
> >
> >
> >
> > --
> > // Jonathan Hsieh (shay)
> > // Software Engineer, Cloudera
> > // [email protected]
> >
> >
> >
>
>
> --
> // Jonathan Hsieh (shay)
> // Software Engineer, Cloudera
> // [email protected]
>
