Looks like you need to copy a commons-configuration jar into HBase's lib
directory; this version of hadoop seems to depend on it:

java.lang.NoClassDefFoundError: org/apache/commons/configuration/Configuration

And just so you are clear: this version of hadoop does not have
sync/append, so hbase will lose data on crash.
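A sketch of that copy step. The paths below are assumptions, and the snippet stages dummy directories so it runs anywhere; on a real cluster, point HADOOP_HOME and HBASE_HOME at the actual installs (the jar should already be in Hadoop's own lib/ directory):

```shell
# Sandbox stand-ins for the real install dirs (assumed paths).
HADOOP_HOME=$(mktemp -d)/hadoop-0.20.203.0
HBASE_HOME=$(mktemp -d)/hbase-0.90.3
mkdir -p "$HADOOP_HOME/lib" "$HBASE_HOME/lib"
touch "$HADOOP_HOME/lib/commons-configuration-1.6.jar"

# The actual fix: copy commons-configuration from Hadoop's lib
# into HBase's lib so the master can load the missing class.
cp "$HADOOP_HOME"/lib/commons-configuration-*.jar "$HBASE_HOME/lib/"
ls "$HBASE_HOME/lib/"   # -> commons-configuration-1.6.jar
```

After the copy, restart the master; the NoClassDefFoundError above should go away.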
St.Ack


On Wed, Jun 8, 2011 at 10:42 AM, Ratner, Alan S (IS)
<[email protected]> wrote:
> J-D,
>
>   Thanks for the info.  I copied the appropriate hadoop jar file to the lib 
> directory (and renamed the original one).  I wasn't able to figure out why 
> zookeeper wasn't running on my master server, so I launched zookeeper directly 
> and set HBASE_MANAGES_ZK to false.  (And since I am running zoo 3.3.3, I 
> copied its jar file to the lib directory.)  But it still seems I have some 
> sort of disconnect between HBase and Zookeeper.
>
>    Any advice would be appreciated. Thanks.
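One quick way to check the ensemble from any node is ZooKeeper's built-in four-letter commands: a healthy server answers "ruok" with "imok", and "stat" lists its live client sessions. A sketch, using the quorum from the hbase-site.xml below (the nc probes are commented out so it runs without a live ensemble):

```shell
# hbase.zookeeper.quorum from the posted hbase-site.xml.
QUORUM="hadoop1,hadoop2,hadoop3"

# Probe each ensemble member on the client port.
for host in $(echo "$QUORUM" | tr ',' ' '); do
  echo "probing $host:2181"
  # echo ruok | nc "$host" 2181   # expect "imok" from a healthy server
  # echo stat | nc "$host" 2181   # lists connected client sessions
done
```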
>
>
>   Zookeeper issues the following warning:
> 2011-06-06 10:43:54,541 - WARN  
> [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:2181:NIOServerCnxn@634] - 
> EndOfStreamException: Unable to read additional data from client sessionid 
> 0x13064fb5fea0001, likely client has closed socket
> 2011-06-06 10:43:54,542 - INFO  
> [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:2181:NIOServerCnxn@1435] - Closed 
> socket connection for client /10.64.155.56:42561 which had sessionid 
> 0x13064fb5fea0001
> 2011-06-06 10:44:50,644 - INFO  
> [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:2181:NIOServerCnxn$Factory@251] - 
> Accepted socket connection from /10.64.155.54:50653
> 2011-06-06 10:44:50,648 - INFO  
> [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:2181:NIOServerCnxn@777] - Client 
> attempting to establish new session at /10.64.155.54:50653
> 2011-06-06 10:44:50,656 - INFO  
> [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:2181:NIOServerCnxn$Factory@251] - 
> Accepted socket connection from /10.64.155.56:41696
> 2011-06-06 10:44:50,659 - INFO  [CommitProcessor:1:NIOServerCnxn@1580] - 
> Established session 0x13064fb5fea0002 with negotiated timeout 40000 for 
> client /10.64.155.54:50653
> 2011-06-06 10:44:50,661 - INFO  
> [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:2181:NIOServerCnxn$Factory@251] - 
> Accepted socket connection from /10.64.155.57:39448
> 2011-06-06 10:44:50,662 - INFO  
> [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:2181:NIOServerCnxn@777] - Client 
> attempting to establish new session at /10.64.155.56:41696
> 2011-06-06 10:44:50,665 - INFO  
> [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:2181:NIOServerCnxn@777] - Client 
> attempting to establish new session at /10.64.155.57:39448
> 2011-06-06 10:44:50,676 - INFO  [CommitProcessor:1:NIOServerCnxn@1580] - 
> Established session 0x13064fb5fea0003 with negotiated timeout 40000 for 
> client /10.64.155.56:41696
> 2011-06-06 10:44:50,680 - INFO  [CommitProcessor:1:NIOServerCnxn@1580] - 
> Established session 0x13064fb5fea0004 with negotiated timeout 40000 for 
> client /10.64.155.57:39448
> 2011-06-06 10:44:50,801 - INFO  
> [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:2181:NIOServerCnxn$Factory@251] - 
> Accepted socket connection from /10.64.155.54:50654
> 2011-06-06 10:44:50,801 - INFO  
> [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:2181:NIOServerCnxn$Factory@251] - 
> Accepted socket connection from /10.64.155.53:35203
> 2011-06-06 10:44:50,802 - INFO  
> [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:2181:NIOServerCnxn@777] - Client 
> attempting to establish new session at /10.64.155.54:50654
> 2011-06-06 10:44:50,802 - INFO  
> [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:2181:NIOServerCnxn@777] - Client 
> attempting to establish new session at /10.64.155.53:35203
> 2011-06-06 10:44:50,807 - INFO  
> [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:2181:NIOServerCnxn$Factory@251] - 
> Accepted socket connection from /10.64.155.56:41697
> 2011-06-06 10:44:50,808 - INFO  
> [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:2181:NIOServerCnxn@777] - Client 
> attempting to establish new session at /10.64.155.56:41697
> 2011-06-06 10:44:50,813 - INFO  [CommitProcessor:1:NIOServerCnxn@1580] - 
> Established session 0x13064fb5fea0005 with negotiated timeout 40000 for 
> client /10.64.155.54:50654
> 2011-06-06 10:44:50,823 - INFO  [CommitProcessor:1:NIOServerCnxn@1580] - 
> Established session 0x13064fb5fea0006 with negotiated timeout 40000 for 
> client /10.64.155.53:35203
> 2011-06-06 10:44:50,824 - INFO  [CommitProcessor:1:NIOServerCnxn@1580] - 
> Established session 0x13064fb5fea0007 with negotiated timeout 40000 for 
> client /10.64.155.56:41697
>
>
> HBASE MASTER LOG
> ----------------
> Mon Jun  6 10:44:48 EDT 2011 Starting master on hadoop1 ulimit -n 1024
> 2011-06-06 10:44:49,333 INFO org.apache.hadoop.hbase.ipc.HBaseRpcMetrics: 
> Initializing RPC Metrics with hostName=HMaster, port=60000
> 2011-06-06 10:44:49,398 INFO org.apache.hadoop.ipc.HBaseServer: IPC Server 
> Responder: starting
> 2011-06-06 10:44:49,398 INFO org.apache.hadoop.ipc.HBaseServer: IPC Server 
> listener on 60000: starting
> 2011-06-06 10:44:49,399 INFO org.apache.hadoop.ipc.HBaseServer: IPC Server 
> handler 0 on 60000: starting
> 2011-06-06 10:44:49,399 INFO org.apache.hadoop.ipc.HBaseServer: IPC Server 
> handler 1 on 60000: starting
> 2011-06-06 10:44:49,400 INFO org.apache.hadoop.ipc.HBaseServer: IPC Server 
> handler 2 on 60000: starting
> 2011-06-06 10:44:49,400 INFO org.apache.hadoop.ipc.HBaseServer: IPC Server 
> handler 3 on 60000: starting
> 2011-06-06 10:44:49,400 INFO org.apache.hadoop.ipc.HBaseServer: IPC Server 
> handler 4 on 60000: starting
> 2011-06-06 10:44:49,400 INFO org.apache.hadoop.ipc.HBaseServer: IPC Server 
> handler 5 on 60000: starting
> 2011-06-06 10:44:49,400 INFO org.apache.hadoop.ipc.HBaseServer: IPC Server 
> handler 6 on 60000: starting
> 2011-06-06 10:44:49,400 INFO org.apache.hadoop.ipc.HBaseServer: IPC Server 
> handler 7 on 60000: starting
> 2011-06-06 10:44:49,401 INFO org.apache.hadoop.ipc.HBaseServer: IPC Server 
> handler 8 on 60000: starting
> 2011-06-06 10:44:49,401 INFO org.apache.hadoop.ipc.HBaseServer: IPC Server 
> handler 9 on 60000: starting
> 2011-06-06 10:44:49,418 INFO org.apache.zookeeper.ZooKeeper: Client 
> environment:zookeeper.version=3.3.3-1073969, built on 02/23/2011 22:27 GMT
> 2011-06-06 10:44:49,418 INFO org.apache.zookeeper.ZooKeeper: Client 
> environment:host.name=hadoop1.aj.c2fse.northgrum.com
> 2011-06-06 10:44:49,418 INFO org.apache.zookeeper.ZooKeeper: Client 
> environment:java.version=1.6.0_25
> 2011-06-06 10:44:49,418 INFO org.apache.zookeeper.ZooKeeper: Client 
> environment:java.vendor=Sun Microsystems Inc.
> 2011-06-06 10:44:49,418 INFO org.apache.zookeeper.ZooKeeper: Client 
> environment:java.home=/home/ngc/jdk1.6.0_25/jre
> 2011-06-06 10:44:49,418 INFO org.apache.zookeeper.ZooKeeper: Client 
> environment:java.class.path=/home/ngc/hbase-0.90.3/conf:/home/ngc/jdk1.6.0_25/lib/tools.jar:/home/ngc/hbase-0.90.3/bin/..:/home/ngc/hbase-0.90.3/bin/../hbase-0.90.3.jar:/home/ngc/hbase-0.90.3/bin/../hbase-0.90.3-tests.jar:/home/ngc/hbase-0.90.3/bin/../lib/activation-1.1.jar:/home/ngc/hbase-0.90.3/bin/../lib/asm-3.1.jar:/home/ngc/hbase-0.90.3/bin/../lib/avro-1.3.3.jar:/home/ngc/hbase-0.90.3/bin/../lib/commons-cli-1.2.jar:/home/ngc/hbase-0.90.3/bin/../lib/commons-codec-1.4.jar:/home/ngc/hbase-0.90.3/bin/../lib/commons-el-1.0.jar:/home/ngc/hbase-0.90.3/bin/../lib/commons-httpclient-3.1.jar:/home/ngc/hbase-0.90.3/bin/../lib/commons-lang-2.5.jar:/home/ngc/hbase-0.90.3/bin/../lib/commons-logging-1.1.1.jar:/home/ngc/hbase-0.90.3/bin/../lib/commons-net-1.4.1.jar:/home/ngc/hbase-0.90.3/bin/../lib/core-3.1.1.jar:/home/ngc/hbase-0.90.3/bin/../lib/guava-r06.jar:/home/ngc/hbase-0.90.3/bin/../lib/hadoop-core-0.20.203.0.jar:/home/ngc/hbase-0.90.3/bin/../lib/jackson-core-asl-1.5.5.jar:/home/ngc/hbase-0.90.3/bin/../lib/jackson-jaxrs-1.5.5.jar:/home/ngc/hbase-0.90.3/bin/../lib/jackson-mapper-asl-1.4.2.jar:/home/ngc/hbase-0.90.3/bin/../lib/jackson-xc-1.5.5.jar:/home/ngc/hbase-0.90.3/bin/../lib/jasper-compiler-5.5.23.jar:/home/ngc/hbase-0.90.3/bin/../lib/jasper-runtime-5.5.23.jar:/home/ngc/hbase-0.90.3/bin/../lib/jaxb-api-2.1.jar:/home/ngc/hbase-0.90.3/bin/../lib/jaxb-impl-2.1.12.jar:/home/ngc/hbase-0.90.3/bin/../lib/jersey-core-1.4.jar:/home/ngc/hbase-0.90.3/bin/../lib/jersey-json-1.4.jar:/home/ngc/hbase-0.90.3/bin/../lib/jersey-server-1.4.jar:/home/ngc/hbase-0.90.3/bin/../lib/jettison-1.1.jar:/home/ngc/hbase-0.90.3/bin/../lib/jetty-6.1.26.jar:/home/ngc/hbase-0.90.3/bin/../lib/jetty-util-6.1.26.jar:/home/ngc/hbase-0.90.3/bin/../lib/jruby-complete-1.6.0.jar:/home/ngc/hbase-0.90.3/bin/../lib/jsp-2.1-6.1.14.jar:/home/ngc/hbase-0.90.3/bin/../lib/jsp-api-2.1-6.1.14.jar:/home/ngc/hbase-0.90.3/bin/../lib/jsr311-api-1.1.1.jar:/home/ngc/hbase-0.90.3/bin/../lib/log4j-1.2.16.jar:/home/ngc/hbase-0.90.3/bin/../lib/protobuf-java-2.3.0.jar:/home/ngc/hbase-0.90.3/bin/../lib/servlet-api-2.5-6.1.14.jar:/home/ngc/hbase-0.90.3/bin/../lib/slf4j-api-1.5.8.jar:/home/ngc/hbase-0.90.3/bin/../lib/slf4j-log4j12-1.5.8.jar:/home/ngc/hbase-0.90.3/bin/../lib/stax-api-1.0.1.jar:/home/ngc/hbase-0.90.3/bin/../lib/thrift-0.2.0.jar:/home/ngc/hbase-0.90.3/bin/../lib/xmlenc-0.52.jar:/home/ngc/hbase-0.90.3/bin/../lib/zookeeper-3.3.3.jar
> 2011-06-06 10:44:49,418 INFO org.apache.zookeeper.ZooKeeper: Client 
> environment:java.library.path=/home/ngc/jdk1.6.0_25/jre/lib/amd64/server:/home/ngc/jdk1.6.0_25/jre/lib/amd64:/home/ngc/jdk1.6.0_25/jre/../lib/amd64:/home/ngc/OpenCV-2.2.0/OpenCV/library:/home/ngc/ffmpeg-0.6.1/:/home/ngc/OpenCV-2.2.0/lib:/usr/local/:/usr/java/packages/lib/amd64:/usr/lib64:/lib64:/lib:/usr/lib
> 2011-06-06 10:44:49,418 INFO org.apache.zookeeper.ZooKeeper: Client 
> environment:java.io.tmpdir=/tmp
> 2011-06-06 10:44:49,418 INFO org.apache.zookeeper.ZooKeeper: Client 
> environment:java.compiler=<NA>
> 2011-06-06 10:44:49,418 INFO org.apache.zookeeper.ZooKeeper: Client 
> environment:os.name=Linux
> 2011-06-06 10:44:49,418 INFO org.apache.zookeeper.ZooKeeper: Client 
> environment:os.arch=amd64
> 2011-06-06 10:44:49,418 INFO org.apache.zookeeper.ZooKeeper: Client 
> environment:os.version=2.6.32-24-server
> 2011-06-06 10:44:49,418 INFO org.apache.zookeeper.ZooKeeper: Client 
> environment:user.name=ngc
> 2011-06-06 10:44:49,418 INFO org.apache.zookeeper.ZooKeeper: Client 
> environment:user.home=/home/ngc
> 2011-06-06 10:44:49,418 INFO org.apache.zookeeper.ZooKeeper: Client 
> environment:user.dir=/home/ngc/hbase-0.90.3
> 2011-06-06 10:44:49,419 INFO org.apache.zookeeper.ZooKeeper: Initiating 
> client connection, connectString=hadoop3:2181,hadoop2:2181,hadoop1:2181 
> sessionTimeout=180000 watcher=master:60000
> 2011-06-06 10:44:49,440 INFO org.apache.zookeeper.ClientCnxn: Opening socket 
> connection to server hadoop2/10.64.155.53:2181
> 2011-06-06 10:44:49,446 INFO org.apache.zookeeper.ClientCnxn: Socket 
> connection established to hadoop2/10.64.155.53:2181, initiating session
> 2011-06-06 10:44:49,467 INFO org.apache.zookeeper.ClientCnxn: Session 
> establishment complete on server hadoop2/10.64.155.53:2181, sessionid = 
> 0x23064fd26ab0005, negotiated timeout = 40000
> 2011-06-06 10:44:49,480 INFO org.apache.hadoop.metrics.jvm.JvmMetrics: 
> Initializing JVM Metrics with processName=Master, 
> sessionId=hadoop1.aj.c2fse.northgrum.com:60000
> 2011-06-06 10:44:49,489 INFO org.apache.hadoop.hbase.metrics: MetricsString 
> added: revision
> 2011-06-06 10:44:49,489 INFO org.apache.hadoop.hbase.metrics: MetricsString 
> added: hdfsUser
> 2011-06-06 10:44:49,489 INFO org.apache.hadoop.hbase.metrics: MetricsString 
> added: hdfsDate
> 2011-06-06 10:44:49,489 INFO org.apache.hadoop.hbase.metrics: MetricsString 
> added: hdfsUrl
> 2011-06-06 10:44:49,489 INFO org.apache.hadoop.hbase.metrics: MetricsString 
> added: date
> 2011-06-06 10:44:49,489 INFO org.apache.hadoop.hbase.metrics: MetricsString 
> added: hdfsRevision
> 2011-06-06 10:44:49,489 INFO org.apache.hadoop.hbase.metrics: MetricsString 
> added: user
> 2011-06-06 10:44:49,489 INFO org.apache.hadoop.hbase.metrics: MetricsString 
> added: hdfsVersion
> 2011-06-06 10:44:49,489 INFO org.apache.hadoop.hbase.metrics: MetricsString 
> added: url
> 2011-06-06 10:44:49,489 INFO org.apache.hadoop.hbase.metrics: MetricsString 
> added: version
> 2011-06-06 10:44:49,489 INFO org.apache.hadoop.hbase.metrics: new MBeanInfo
> 2011-06-06 10:44:49,490 INFO org.apache.hadoop.hbase.metrics: new MBeanInfo
> 2011-06-06 10:44:49,490 INFO 
> org.apache.hadoop.hbase.master.metrics.MasterMetrics: Initialized
> 2011-06-06 10:44:49,506 INFO 
> org.apache.hadoop.hbase.master.ActiveMasterManager: 
> Master=hadoop1.aj.c2fse.northgrum.com:60000
> 2011-06-06 10:44:49,620 FATAL org.apache.hadoop.hbase.master.HMaster: 
> Unhandled exception. Starting shutdown.
> java.lang.NoClassDefFoundError: org/apache/commons/configuration/Configuration
>        at 
> org.apache.hadoop.metrics2.lib.DefaultMetricsSystem.<init>(DefaultMetricsSystem.java:37)
>        at 
> org.apache.hadoop.metrics2.lib.DefaultMetricsSystem.<clinit>(DefaultMetricsSystem.java:34)
>        at 
> org.apache.hadoop.security.UgiInstrumentation.create(UgiInstrumentation.java:51)
>        at 
> org.apache.hadoop.security.UserGroupInformation.initialize(UserGroupInformation.java:196)
>        at 
> org.apache.hadoop.security.UserGroupInformation.ensureInitialized(UserGroupInformation.java:159)
>        at 
> org.apache.hadoop.security.UserGroupInformation.isSecurityEnabled(UserGroupInformation.java:216)
>        at 
> org.apache.hadoop.security.KerberosName.<clinit>(KerberosName.java:83)
>        at 
> org.apache.hadoop.security.UserGroupInformation.initialize(UserGroupInformation.java:189)
>        at 
> org.apache.hadoop.security.UserGroupInformation.ensureInitialized(UserGroupInformation.java:159)
>        at 
> org.apache.hadoop.security.UserGroupInformation.isSecurityEnabled(UserGroupInformation.java:216)
>        at 
> org.apache.hadoop.security.UserGroupInformation.getLoginUser(UserGroupInformation.java:409)
>        at 
> org.apache.hadoop.security.UserGroupInformation.getCurrentUser(UserGroupInformation.java:395)
>        at 
> org.apache.hadoop.fs.FileSystem$Cache$Key.<init>(FileSystem.java:1418)
>        at org.apache.hadoop.fs.FileSystem$Cache.get(FileSystem.java:1319)
>        at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:226)
>        at org.apache.hadoop.fs.Path.getFileSystem(Path.java:187)
>        at org.apache.hadoop.hbase.util.FSUtils.getRootDir(FSUtils.java:344)
>        at 
> org.apache.hadoop.hbase.master.MasterFileSystem.<init>(MasterFileSystem.java:81)
>        at 
> org.apache.hadoop.hbase.master.HMaster.finishInitialization(HMaster.java:347)
>        at org.apache.hadoop.hbase.master.HMaster.run(HMaster.java:283)
> Caused by: java.lang.ClassNotFoundException: 
> org.apache.commons.configuration.Configuration
>        at java.net.URLClassLoader$1.run(URLClassLoader.java:202)
>        at java.security.AccessController.doPrivileged(Native Method)
>        at java.net.URLClassLoader.findClass(URLClassLoader.java:190)
>        at java.lang.ClassLoader.loadClass(ClassLoader.java:306)
>        at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:301)
>        at java.lang.ClassLoader.loadClass(ClassLoader.java:247)
>        ... 20 more
> 2011-06-06 10:44:49,622 INFO org.apache.hadoop.hbase.master.HMaster: Aborting
> 2011-06-06 10:44:49,622 DEBUG org.apache.hadoop.hbase.master.HMaster: 
> Stopping service threads
> 2011-06-06 10:44:49,622 INFO org.apache.hadoop.ipc.HBaseServer: Stopping 
> server on 60000
> 2011-06-06 10:44:49,622 INFO org.apache.hadoop.ipc.HBaseServer: IPC Server 
> handler 0 on 60000: exiting
> 2011-06-06 10:44:49,622 INFO org.apache.hadoop.ipc.HBaseServer: IPC Server 
> handler 2 on 60000: exiting
> 2011-06-06 10:44:49,623 INFO org.apache.hadoop.ipc.HBaseServer: IPC Server 
> handler 8 on 60000: exiting
> 2011-06-06 10:44:49,623 INFO org.apache.hadoop.ipc.HBaseServer: IPC Server 
> handler 7 on 60000: exiting
> 2011-06-06 10:44:49,623 INFO org.apache.hadoop.ipc.HBaseServer: IPC Server 
> handler 9 on 60000: exiting
> 2011-06-06 10:44:49,623 INFO org.apache.hadoop.ipc.HBaseServer: IPC Server 
> handler 5 on 60000: exiting
> 2011-06-06 10:44:49,623 INFO org.apache.hadoop.ipc.HBaseServer: Stopping IPC 
> Server listener on 60000
> 2011-06-06 10:44:49,623 INFO org.apache.hadoop.ipc.HBaseServer: IPC Server 
> handler 1 on 60000: exiting
> 2011-06-06 10:44:49,623 INFO org.apache.hadoop.ipc.HBaseServer: IPC Server 
> handler 3 on 60000: exiting
> 2011-06-06 10:44:49,623 INFO org.apache.hadoop.ipc.HBaseServer: IPC Server 
> handler 4 on 60000: exiting
> 2011-06-06 10:44:49,630 INFO org.apache.hadoop.ipc.HBaseServer: IPC Server 
> handler 6 on 60000: exiting
> 2011-06-06 10:44:49,630 INFO org.apache.hadoop.ipc.HBaseServer: Stopping IPC 
> Server Responder
> 2011-06-06 10:44:49,658 INFO org.apache.zookeeper.ZooKeeper: Session: 
> 0x23064fd26ab0005 closed
> 2011-06-06 10:44:49,659 INFO org.apache.hadoop.hbase.master.HMaster: HMaster 
> main thread exiting
> 2011-06-06 10:44:49,659 INFO org.apache.zookeeper.ClientCnxn: EventThread 
> shut down
>
>
> #
> #/**
> # * Copyright 2007 The Apache Software Foundation
> # *
> # * Licensed to the Apache Software Foundation (ASF) under one
> # * or more contributor license agreements.  See the NOTICE file
> # * distributed with this work for additional information
> # * regarding copyright ownership.  The ASF licenses this file
> # * to you under the Apache License, Version 2.0 (the
> # * "License"); you may not use this file except in compliance
> # * with the License.  You may obtain a copy of the License at
> # *
> # *     http://www.apache.org/licenses/LICENSE-2.0
> # *
> # * Unless required by applicable law or agreed to in writing, software
> # * distributed under the License is distributed on an "AS IS" BASIS,
> # * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
> # * See the License for the specific language governing permissions and
> # * limitations under the License.
> # */
>
> # Set environment variables here.
>
> # The java implementation to use.  Java 1.6 required.
> export JAVA_HOME=/home/ngc/jdk1.6.0_25
>
> # Extra Java CLASSPATH elements.  Optional.
> # export HBASE_CLASSPATH=
>
> # The maximum amount of heap to use, in MB. Default is 1000.
> export HBASE_HEAPSIZE=8000
>
> # Extra Java runtime options.
> # Below are what we set by default.  May only work with SUN JVM.
> # For more on why as well as other possible settings,
> # see http://wiki.apache.org/hadoop/PerformanceTuning
> export HBASE_OPTS="-ea -XX:+UseConcMarkSweepGC -XX:+CMSIncrementalMode"
>
> # Uncomment below to enable java garbage collection logging.
> # export HBASE_OPTS="$HBASE_OPTS -verbose:gc -XX:+PrintGCDetails -XX:+PrintGCDateStamps -Xloggc:$HBASE_HOME/logs/gc-hbase.log"
>
> # Uncomment and adjust to enable JMX exporting
> # See jmxremote.password and jmxremote.access in $JRE_HOME/lib/management
> # to configure remote password access.
> # More details at: 
> http://java.sun.com/javase/6/docs/technotes/guides/management/agent.html
> #
> # export HBASE_JMX_BASE="-Dcom.sun.management.jmxremote.ssl=false -Dcom.sun.management.jmxremote.authenticate=false"
> # export HBASE_MASTER_OPTS="$HBASE_JMX_BASE -Dcom.sun.management.jmxremote.port=10101 -javaagent:lib/HelloWorldAgent.jar"
> # export HBASE_REGIONSERVER_OPTS="$HBASE_JMX_BASE -Dcom.sun.management.jmxremote.port=10102"
> # export HBASE_THRIFT_OPTS="$HBASE_JMX_BASE -Dcom.sun.management.jmxremote.port=10103"
> # export HBASE_ZOOKEEPER_OPTS="$HBASE_JMX_BASE -Dcom.sun.management.jmxremote.port=10104"
>
> # File naming hosts on which HRegionServers will run.
> # $HBASE_HOME/conf/regionservers by default.
> # export HBASE_REGIONSERVERS=${HBASE_HOME}/conf/regionservers
>
> # Extra ssh options.  Empty by default.
> # export HBASE_SSH_OPTS="-o ConnectTimeout=1 -o SendEnv=HBASE_CONF_DIR"
>
> # Where log files are stored.  $HBASE_HOME/logs by default.
> # export HBASE_LOG_DIR=${HBASE_HOME}/logs
> export HBASE_LOG_DIR=/tmp/hbase-ngc/logs
>
> # A string representing this instance of hbase. $USER by default.
> # export HBASE_IDENT_STRING=$USER
>
> # The scheduling priority for daemon processes.  See 'man nice'.
> # export HBASE_NICENESS=10
>
> # The directory where pid files are stored. /tmp by default.
> # export HBASE_PID_DIR=/var/hadoop/pids
>
> # Seconds to sleep between slave commands.  Unset by default.  This
> # can be useful in large clusters, where, e.g., slave rsyncs can
> # otherwise arrive faster than the master can service them.
> # export HBASE_SLAVE_SLEEP=0.1
>
> # Tell HBase whether it should manage its own instance of Zookeeper or not.
> export HBASE_MANAGES_ZK=false
>
> <?xml version="1.0"?>
> <?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
> <!--
> /**
>  * Copyright 2010 The Apache Software Foundation
>  *
>  * Licensed to the Apache Software Foundation (ASF) under one
>  * or more contributor license agreements.  See the NOTICE file
>  * distributed with this work for additional information
>  * regarding copyright ownership.  The ASF licenses this file
>  * to you under the Apache License, Version 2.0 (the
>  * "License"); you may not use this file except in compliance
>  * with the License.  You may obtain a copy of the License at
>  *
>  *     http://www.apache.org/licenses/LICENSE-2.0
>  *
>  * Unless required by applicable law or agreed to in writing, software
>  * distributed under the License is distributed on an "AS IS" BASIS,
>  * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
>  * See the License for the specific language governing permissions and
>  * limitations under the License.
>  */
> -->
> <configuration>
>
>  <property>
>    <name>hbase.cluster.distributed</name>
>    <value>true</value>
>    <description>The mode the cluster will be in. Possible values are
>      false: standalone and pseudo-distributed setups with managed Zookeeper
>      true: fully-distributed with unmanaged Zookeeper Quorum (see 
> hbase-env.sh)
>    </description>
>  </property>
>
>  <property>
>    <name>hbase.master</name>
>    <value>hadoop1:60000</value>
>    <description>The host and port that the HBase master runs at.
>      A value of 'local' runs the master and a regionserver in
>      a single process.
>    </description>
>  </property>
>
>  <property>
>    <name>hbase.rootdir</name>
>    <value>hdfs://hadoop1:9000/hbase</value>
>  </property>
>
>  <property>
>    <name>hbase.zookeeper.quorum</name>
>    <value>hadoop1,hadoop2,hadoop3</value>
>  </property>
>
> </configuration>
>
> Alan
>
> -----Original Message-----
> From: [email protected] [mailto:[email protected]] On Behalf Of 
> Jean-Daniel Cryans
> Sent: Thursday, June 02, 2011 1:28 PM
> To: [email protected]
> Subject: EXT :Re: Failure to Launch: hbase-0.90.3 with hadoop-0.20.203.0
>
> The zk stuff is ok, it's just that hadoop1 doesn't have a zk server but 
> hadoop2 does (so review your configuration).
>
> You need to replace the hadoop jar since right now you have 
> /hadoop-core-0.20-append-r1056497.jar
>
> Like the doc says http://hbase.apache.org/book.html#hadoop
>
> " It is critical that the version of Hadoop that is out on your cluster 
> matches the version HBase was built against. Replace the hadoop jar found 
> in the HBase lib directory with the hadoop jar you are running out on your 
> cluster to avoid version mismatch issues."
>
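A sketch of that jar swap. The paths and jar names below are assumptions taken from this thread, and the snippet stages dummy files so it runs anywhere; on the real cluster, use your actual HADOOP_HOME and HBASE_HOME:

```shell
# Sandbox stand-ins for the real install dirs (assumed paths).
HBASE_HOME=$(mktemp -d)/hbase-0.90.3
HADOOP_HOME=$(mktemp -d)/hadoop-0.20.203.0
mkdir -p "$HBASE_HOME/lib" "$HADOOP_HOME"
touch "$HBASE_HOME/lib/hadoop-core-0.20-append-r1056497.jar"
touch "$HADOOP_HOME/hadoop-core-0.20.203.0.jar"

# Set the bundled jar aside (a .orig suffix keeps it off the
# classpath, since bin/hbase only picks up lib/*.jar) and drop in
# the jar your cluster actually runs.
mv "$HBASE_HOME"/lib/hadoop-core-*.jar "$HBASE_HOME/lib/hadoop-core.jar.orig"
cp "$HADOOP_HOME"/hadoop-core-0.20.203.0.jar "$HBASE_HOME/lib/"
ls "$HBASE_HOME/lib/"
```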
> J-D
>
>
>
>
