You should be able to grab that (fs.default.name) from the core-site and add it into your hbase-site. On Nov 18, 2012 6:34 PM, "Jean-Marc Spaggiari" <[email protected]> wrote:
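For example, copying the fs.default.name property from the cluster's core-site.xml into the hbase-site.xml that bin/hbase picks up would look roughly like this (a minimal sketch only; the hdfs://node3:9000 URI is inferred from the "Wrong FS" path in the log below, and the hbase.rootdir entry is shown purely for context, so adjust both to your actual NameNode and HBase root):

  <property>
    <name>fs.default.name</name>
    <value>hdfs://node3:9000</value>  <!-- inferred from hdfs://node3:9000/hbase in the error below -->
  </property>
  <property>
    <name>hbase.rootdir</name>
    <value>hdfs://node3:9000/hbase</value>  <!-- assumption: rootdir matching that same path -->
  </property>

Alternatively, the Merge tool should pick up the cluster settings if you point it at the cluster's conf directory, e.g. bin/hbase --config /path/to/cluster/conf org.apache.hadoop.hbase.util.Merge ..., or by exporting HBASE_CONF_DIR (which is the variable bin/hbase reads, rather than HBASE_CONFIG).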
> Seems it has been discussed a year ago ;)
>
> http://www.mail-archive.com/[email protected]/msg11383.html
>
> It looks like I need to set up fs.default.name ... I tried a few
> different values, but even so it's not working. What should this value
> look like? I used hdfs://localhost/ and the same with the master (also
> hosting the namenode) with no success. Any idea what it should be?
>
> Thanks,
>
> JM
>
> 2012/11/18, Jean-Marc Spaggiari <[email protected]>:
> > Hum.
> >
> > I tried the --config parameter but I don't think bin/hbase is reading it.
> >
> > I have exported HBASE_CONFIG but got the same result.
> >
> > Has anyone succeeded with a merge recently?
> >
> > I will try to google that to see if there is something I missed.
> >
> > 2012/11/18, Jean-Marc Spaggiari <[email protected]>:
> >> And that's the issue.
> >>
> >> I have an 8-node cluster, with 3 ZK servers, everything working fine.
> >> I just ran an MR job last week which generated a 400M-row table, but I
> >> had wrongly pre-split the table, so I want to merge and re-split it.
> >>
> >> I tried with the cluster on, and it detected that and stopped the merge.
> >> I stopped the cluster, but now it's trying to access the files locally? I ran
> >> the merge on the master server directly from the HBase directory.
> >>
> >> I will take a look and see if there is any other parameter I should
> >> pass, or if I missed something...
> >>
> >> JM
> >>
> >> 2012/11/18, Kevin O'dell <[email protected]>:
> >>> Rather, it is expecting local:
> >>>
> >>> 12/11/18 08:54:05 FATAL util.Merge: Merge failed
> >>> java.lang.IllegalArgumentException: Wrong FS:
> >>> hdfs://node3:9000/hbase/-ROOT-/70236052/.regioninfo, expected: file:///
> >>>     at org.apache.hadoop.fs.FileSystem.checkPath(FileSystem.java:381)
> >>>
> >>> On Nov 18, 2012 10:37 AM, "Kevin O'dell" <[email protected]> wrote:
> >>>
> >>>> Looks like your configuration is pointing local.
> >>>> On Nov 18, 2012 8:59 AM, "Jean-Marc Spaggiari" <[email protected]> wrote:
> >>>>
> >>>>> Hi,
> >>>>>
> >>>>> I'm trying to merge 2 regions based on the book documentation and I'm
> >>>>> getting the exception at the bottom of this message. Any idea why?
> >>>>>
> >>>>> I got the region names from the HTML UI. I also tried to scan the
> >>>>> .META. table like in Lars' book. Both are giving me the same info (a
> >>>>> chance!), but it's still not working.
> >>>>>
> >>>>> Also, the HBase online book is not giving the right class info for the
> >>>>> merge. I will open a Jira and propose a patch.
> >>>>>
> >>>>> Thanks,
> >>>>>
> >>>>> JM
> >>>>>
> >>>>> hbase@node3:~/hbase-0.94.2$ bin/hbase org.apache.hadoop.hbase.util.Merge work_proposed
> >>>>> work_proposed,\x0A,1342541226467.19929d1e6b6ecb3beae91e316b790378.
> >>>>> work_proposed,\x14,1342541226467.622a3e62c4cf3f3b59655bdaf915908d.
> >>>>> 12/11/18 08:54:05 INFO util.Merge: Verifying that file system is available...
> >>>>> 12/11/18 08:54:05 INFO util.Merge: Verifying that HBase is not running...
> >>>>> 12/11/18 08:54:05 INFO zookeeper.ZooKeeper: Client environment:zookeeper.version=3.4.3-1240972, built on 02/06/2012 10:48 GMT
> >>>>> 12/11/18 08:54:05 INFO zookeeper.ZooKeeper: Client environment:host.name=node3
> >>>>> 12/11/18 08:54:05 INFO zookeeper.ZooKeeper: Client environment:java.version=1.7.0_05
> >>>>> 12/11/18 08:54:05 INFO zookeeper.ZooKeeper: Client environment:java.vendor=Oracle Corporation
> >>>>> 12/11/18 08:54:05 INFO zookeeper.ZooKeeper: Client environment:java.home=/usr/local/jdk1.7.0_05/jre
> >>>>> 12/11/18 08:54:05 INFO zookeeper.ZooKeeper: Client environment:java.class.path=/home/hbase/hbase-0.94.2//conf:/usr/local/jdk1.7.0_05/lib/tools.jar:/home/hbase/hbase-0.94.2/:/home/hbase/hbase-0.94.2//hbase-0.94.2.jar:/home/hbase/hbase-0.94.2//hbase-0.94.2-tests.jar:/home/hbase/hbase-0.94.2//lib/activation-1.1.jar:/home/hbase/hbase-0.94.2//lib/asm-3.1.jar:/home/hbase/hbase-0.94.2//lib/avro-1.5.3.jar:/home/hbase/hbase-0.94.2//lib/avro-ipc-1.5.3.jar:/home/hbase/hbase-0.94.2//lib/commons-beanutils-1.7.0.jar:/home/hbase/hbase-0.94.2//lib/commons-beanutils-core-1.8.0.jar:/home/hbase/hbase-0.94.2//lib/commons-cli-1.2.jar:/home/hbase/hbase-0.94.2//lib/commons-codec-1.4.jar:/home/hbase/hbase-0.94.2//lib/commons-collections-3.2.1.jar:/home/hbase/hbase-0.94.2//lib/commons-configuration-1.6.jar:/home/hbase/hbase-0.94.2//lib/commons-digester-1.8.jar:/home/hbase/hbase-0.94.2//lib/commons-el-1.0.jar:/home/hbase/hbase-0.94.2//lib/commons-httpclient-3.1.jar:/home/hbase/hbase-0.94.2//lib/commons-io-2.1.jar:/home/hbase/hbase-0.94.2//lib/commons-lang-2.5.jar:/home/hbase/hbase-0.94.2//lib/commons-logging-1.1.1.jar:/home/hbase/hbase-0.94.2//lib/commons-math-2.1.jar:/home/hbase/hbase-0.94.2//lib/commons-net-1.4.1.jar:/home/hbase/hbase-0.94.2//lib/core-3.1.1.jar:/home/hbase/hbase-0.94.2//lib/guava-11.0.2.jar:/home/hbase/hbase-0.94.2//lib/hadoop-core-1.0.3.jar:/home/hbase/hbase-0.94.2//lib/high-scale-lib-1.1.1.jar:/home/hbase/hbase-0.94.2//lib/httpclient-4.1.2.jar:/home/hbase/hbase-0.94.2//lib/httpcore-4.1.3.jar:/home/hbase/hbase-0.94.2//lib/jackson-core-asl-1.8.8.jar:/home/hbase/hbase-0.94.2//lib/jackson-jaxrs-1.8.8.jar:/home/hbase/hbase-0.94.2//lib/jackson-mapper-asl-1.8.8.jar:/home/hbase/hbase-0.94.2//lib/jackson-xc-1.8.8.jar:/home/hbase/hbase-0.94.2//lib/jamon-runtime-2.3.1.jar:/home/hbase/hbase-0.94.2//lib/jasper-compiler-5.5.23.jar:/home/hbase/hbase-0.94.2//lib/jasper-runtime-5.5.23.jar:/home/hbase/hbase-0.94.2//lib/jaxb-api-2.1.jar:/home/hbase/hbase-0.94.2//lib/jaxb-impl-2.2.3-1.jar:/home/hbase/hbase-0.94.2//lib/jersey-core-1.8.jar:/home/hbase/hbase-0.94.2//lib/jersey-json-1.8.jar:/home/hbase/hbase-0.94.2//lib/jersey-server-1.8.jar:/home/hbase/hbase-0.94.2//lib/jettison-1.1.jar:/home/hbase/hbase-0.94.2//lib/jetty-6.1.26.jar:/home/hbase/hbase-0.94.2//lib/jetty-util-6.1.26.jar:/home/hbase/hbase-0.94.2//lib/jruby-complete-1.6.5.jar:/home/hbase/hbase-0.94.2//lib/jsp-2.1-6.1.14.jar:/home/hbase/hbase-0.94.2//lib/jsp-api-2.1-6.1.14.jar:/home/hbase/hbase-0.94.2//lib/jsr305-1.3.9.jar:/home/hbase/hbase-0.94.2//lib/junit-4.10-HBASE-1.jar:/home/hbase/hbase-0.94.2//lib/libthrift-0.8.0.jar:/home/hbase/hbase-0.94.2//lib/log4j-1.2.16.jar:/home/hbase/hbase-0.94.2//lib/metrics-core-2.1.2.jar:/home/hbase/hbase-0.94.2//lib/netty-3.2.4.Final.jar:/home/hbase/hbase-0.94.2//lib/protobuf-java-2.4.0a.jar:/home/hbase/hbase-0.94.2//lib/servlet-api-2.5-6.1.14.jar:/home/hbase/hbase-0.94.2//lib/slf4j-api-1.4.3.jar:/home/hbase/hbase-0.94.2//lib/slf4j-log4j12-1.4.3.jar:/home/hbase/hbase-0.94.2//lib/snappy-java-1.0.3.2.jar:/home/hbase/hbase-0.94.2//lib/stax-api-1.0.1.jar:/home/hbase/hbase-0.94.2//lib/velocity-1.7.jar:/home/hbase/hbase-0.94.2//lib/xmlenc-0.52.jar:/home/hbase/hbase-0.94.2//lib/zookeeper-3.4.3.jar:
> >>>>> 12/11/18 08:54:05 INFO zookeeper.ZooKeeper: Client environment:java.library.path=/home/hbase/hbase-0.94.2//lib/native/Linux-amd64-64
> >>>>> 12/11/18 08:54:05 INFO zookeeper.ZooKeeper: Client environment:java.io.tmpdir=/tmp
> >>>>> 12/11/18 08:54:05 INFO zookeeper.ZooKeeper: Client environment:java.compiler=<NA>
> >>>>> 12/11/18 08:54:05 INFO zookeeper.ZooKeeper: Client environment:os.name=Linux
> >>>>> 12/11/18 08:54:05 INFO zookeeper.ZooKeeper: Client environment:os.arch=amd64
> >>>>> 12/11/18 08:54:05 INFO zookeeper.ZooKeeper: Client environment:os.version=3.2.0-4-amd64
> >>>>> 12/11/18 08:54:05 INFO zookeeper.ZooKeeper: Client environment:user.name=hbase
> >>>>> 12/11/18 08:54:05 INFO zookeeper.ZooKeeper: Client environment:user.home=/home/hbase
> >>>>> 12/11/18 08:54:05 INFO zookeeper.ZooKeeper: Client environment:user.dir=/home/hbase/hbase-0.94.2
> >>>>> 12/11/18 08:54:05 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=latitude:2181,cube:2181,node3:2181 sessionTimeout=180000 watcher=hconnection
> >>>>> 12/11/18 08:54:05 INFO zookeeper.ClientCnxn: Opening socket connection to server /192.168.23.1:2181
> >>>>> 12/11/18 08:54:05 INFO zookeeper.RecoverableZooKeeper: The identifier of this process is 7006@node3
> >>>>> 12/11/18 08:54:05 INFO client.ZooKeeperSaslClient: Client will not SASL-authenticate because the default JAAS configuration section 'Client' could not be found. If you are not using SASL, you may ignore this. On the other hand, if you expected SASL to work, please fix your JAAS configuration.
> >>>>> 12/11/18 08:54:05 INFO zookeeper.ClientCnxn: Socket connection established to cube/192.168.23.1:2181, initiating session
> >>>>> 12/11/18 08:54:05 INFO zookeeper.ClientCnxn: Session establishment complete on server cube/192.168.23.1:2181, sessionid = 0x13b119988410030, negotiated timeout = 40000
> >>>>> 12/11/18 08:54:05 INFO client.HConnectionManager$HConnectionImplementation: ZooKeeper available but no active master location found
> >>>>> 12/11/18 08:54:05 INFO client.HConnectionManager$HConnectionImplementation: getMaster attempt 0 of 1 failed; no more retrying.
> >>>>> org.apache.hadoop.hbase.MasterNotRunningException
> >>>>>     at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.getMaster(HConnectionManager.java:674)
> >>>>>     at org.apache.hadoop.hbase.client.HBaseAdmin.<init>(HBaseAdmin.java:110)
> >>>>>     at org.apache.hadoop.hbase.client.HBaseAdmin.checkHBaseAvailable(HBaseAdmin.java:1659)
> >>>>>     at org.apache.hadoop.hbase.util.Merge.run(Merge.java:94)
> >>>>>     at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:65)
> >>>>>     at org.apache.hadoop.hbase.util.Merge.main(Merge.java:387)
> >>>>> 12/11/18 08:54:05 INFO client.HConnectionManager$HConnectionImplementation: Closed zookeeper sessionid=0x13b119988410030
> >>>>> 12/11/18 08:54:05 INFO zookeeper.ZooKeeper: Session: 0x13b119988410030 closed
> >>>>> 12/11/18 08:54:05 INFO zookeeper.ClientCnxn: EventThread shut down
> >>>>> 12/11/18 08:54:05 INFO util.Merge: Merging regions work_proposed,x0A,1342541226467.19929d1e6b6ecb3beae91e316b790378. and work_proposed,x14,1342541226467.622a3e62c4cf3f3b59655bdaf915908d. in table work_proposed
> >>>>> 12/11/18 08:54:05 INFO wal.HLog: FileSystem doesn't support getDefaultBlockSize
> >>>>> 12/11/18 08:54:05 INFO wal.HLog: HLog configuration: blocksize=32 MB, rollsize=30.4 MB, enabled=true, optionallogflushinternal=1000ms
> >>>>> 12/11/18 08:54:05 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
> >>>>> 12/11/18 08:54:05 DEBUG wal.SequenceFileLogWriter: using new createWriter -- HADOOP-6840
> >>>>> 12/11/18 08:54:05 DEBUG wal.SequenceFileLogWriter: Path=file:/home/hbase/.logs_1353246845818/hlog.1353246845824, syncFs=true, hflush=false, compression=false
> >>>>> 12/11/18 08:54:05 INFO wal.HLog: for /home/hbase/.logs_1353246845818/hlog.1353246845824
> >>>>> 12/11/18 08:54:05 INFO wal.HLog: FileSystem's output stream doesn't support getNumCurrentReplicas; --HDFS-826 not available; fsOut=org.apache.hadoop.fs.ChecksumFileSystem$ChecksumFSOutputSummer
> >>>>> 12/11/18 08:54:05 DEBUG regionserver.HRegion: Opening region: {NAME => '-ROOT-,,0', STARTKEY => '', ENDKEY => '', ENCODED => 70236052,}
> >>>>> 12/11/18 08:54:05 INFO regionserver.HRegion: Setting up tabledescriptor config now ...
> >>>>> 12/11/18 08:54:05 DEBUG regionserver.HRegion: Instantiated -ROOT-,,0.70236052
> >>>>> 12/11/18 08:54:05 FATAL util.Merge: Merge failed
> >>>>> java.lang.IllegalArgumentException: Wrong FS: hdfs://node3:9000/hbase/-ROOT-/70236052/.regioninfo, expected: file:///
> >>>>>     at org.apache.hadoop.fs.FileSystem.checkPath(FileSystem.java:381)
> >>>>>     at org.apache.hadoop.fs.RawLocalFileSystem.pathToFile(RawLocalFileSystem.java:55)
> >>>>>     at org.apache.hadoop.fs.LocalFileSystem.pathToFile(LocalFileSystem.java:61)
> >>>>>     at org.apache.hadoop.fs.LocalFileSystem.exists(LocalFileSystem.java:51)
> >>>>>     at org.apache.hadoop.hbase.regionserver.HRegion.checkRegioninfoOnFilesystem(HRegion.java:700)
> >>>>>     at org.apache.hadoop.hbase.regionserver.HRegion.initializeRegionInternals(HRegion.java:482)
> >>>>>     at org.apache.hadoop.hbase.regionserver.HRegion.initialize(HRegion.java:461)
> >>>>>     at org.apache.hadoop.hbase.regionserver.HRegion.openHRegion(HRegion.java:3813)
> >>>>>     at org.apache.hadoop.hbase.regionserver.HRegion.openHRegion(HRegion.java:3761)
> >>>>>     at org.apache.hadoop.hbase.regionserver.HRegion.openHRegion(HRegion.java:3721)
> >>>>>     at org.apache.hadoop.hbase.util.MetaUtils.openRootRegion(MetaUtils.java:265)
> >>>>>     at org.apache.hadoop.hbase.util.MetaUtils.scanRootRegion(MetaUtils.java:197)
> >>>>>     at org.apache.hadoop.hbase.util.Merge.mergeTwoRegions(Merge.java:205)
> >>>>>     at org.apache.hadoop.hbase.util.Merge.run(Merge.java:111)
> >>>>>     at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:65)
> >>>>>     at org.apache.hadoop.hbase.util.Merge.main(Merge.java:387)
> >>>>> 12/11/18 08:54:05 DEBUG regionserver.HRegion: Opening region: {NAME => '.META.,,1', STARTKEY => '', ENDKEY => '', ENCODED => 1028785192,}
> >>>>> 12/11/18 08:54:05 INFO regionserver.HRegion: Setting up tabledescriptor config now ...
> >>>>> 12/11/18 08:54:05 DEBUG regionserver.HRegion: Instantiated .META.,,1.1028785192
> >>>>> 12/11/18 08:54:05 INFO wal.HLog: main.logSyncer exiting
> >>>>> 12/11/18 08:54:05 DEBUG wal.HLog: closing hlog writer in file:/home/hbase/.logs_1353246845818
> >>>>> 12/11/18 08:54:05 DEBUG wal.HLog: Moved 1 log files to /home/hbase/.oldlogs
> >>>>> 12/11/18 08:54:05 ERROR util.Merge: exiting due to error
> >>>>> java.lang.IllegalArgumentException: Wrong FS: hdfs://node3:9000/hbase/.META./1028785192/.regioninfo, expected: file:///
> >>>>>     at org.apache.hadoop.fs.FileSystem.checkPath(FileSystem.java:381)
> >>>>>     at org.apache.hadoop.fs.RawLocalFileSystem.pathToFile(RawLocalFileSystem.java:55)
> >>>>>     at org.apache.hadoop.fs.LocalFileSystem.pathToFile(LocalFileSystem.java:61)
> >>>>>     at org.apache.hadoop.fs.LocalFileSystem.exists(LocalFileSystem.java:51)
> >>>>>     at org.apache.hadoop.hbase.regionserver.HRegion.checkRegioninfoOnFilesystem(HRegion.java:700)
> >>>>>     at org.apache.hadoop.hbase.regionserver.HRegion.initializeRegionInternals(HRegion.java:482)
> >>>>>     at org.apache.hadoop.hbase.regionserver.HRegion.initialize(HRegion.java:461)
> >>>>>     at org.apache.hadoop.hbase.regionserver.HRegion.openHRegion(HRegion.java:3813)
> >>>>>     at org.apache.hadoop.hbase.regionserver.HRegion.openHRegion(HRegion.java:3761)
> >>>>>     at org.apache.hadoop.hbase.regionserver.HRegion.openHRegion(HRegion.java:3721)
> >>>>>     at org.apache.hadoop.hbase.util.MetaUtils.openMetaRegion(MetaUtils.java:273)
> >>>>>     at org.apache.hadoop.hbase.util.MetaUtils.scanMetaRegion(MetaUtils.java:257)
> >>>>>     at org.apache.hadoop.hbase.util.Merge.run(Merge.java:116)
> >>>>>     at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:65)
> >>>>>     at org.apache.hadoop.hbase.util.Merge.main(Merge.java:387)
> >>>>> hbase@node3:~/hbase-0.94.2$
> >>>>>
> >>>>
> >>>
> >>
> >
>
