At last I have moved one step further. It was a problem with the Hadoop jar file:
I needed to replace the hadoop-core-xx.jar in hbase/lib with the one shipped in
hadoop/lib.
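That swap can be sketched as a small shell helper. The function name, jar versions, and install paths below are assumptions for illustration, not taken from the thread:

```shell
# replace_hadoop_jar: remove the hadoop-core jar bundled under hbase/lib and
# copy in the one from the Hadoop installation, so HBase and Hadoop run the
# same Hadoop version. The directory layout is an assumption; adjust as needed.
replace_hadoop_jar() {
  local hadoop_home=$1 hbase_home=$2
  rm -f "$hbase_home"/lib/hadoop-core-*.jar
  cp "$hadoop_home"/hadoop-core-*.jar "$hbase_home"/lib/
}

# Example (hypothetical paths):
# replace_hadoop_jar /usr/local/hadoop /usr/local/hbase
```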
After replacing it I got the following error:
2011-10-14 17:09:12,409 FATAL org.apache.hadoop.hbase.master.HMaster: Unhandled exception. Starting shutdown.
java.lang.NoClassDefFoundError: org/apache/commons/configuration/Configuration
    at org.apache.hadoop.metrics2.lib.DefaultMetricsSystem.<init>(DefaultMetricsSystem.java:37)
    at org.apache.hadoop.metrics2.lib.DefaultMetricsSystem.<clinit>(DefaultMetricsSystem.java:34)
    at org.apache.hadoop.security.UgiInstrumentation.create(UgiInstrumentation.java:51)
    at org.apache.hadoop.security.UserGroupInformation.initialize(UserGroupInformation.java:196)
    at org.apache.hadoop.security.UserGroupInformation.ensureInitialized(UserGroupInformation.java:159)
    at org.apache.hadoop.security.UserGroupInformation.isSecurityEnabled(UserGroupInformation.java:216)
    at org.apache.hadoop.security.KerberosName.<clinit>(KerberosName.java:83)
    at org.apache.hadoop.security.UserGroupInformation.initialize(UserGroupInformation.java:189)
    at org.apache.hadoop.security.UserGroupInformation.ensureInitialized(UserGroupInformation.java:159)
    at org.apache.hadoop.security.UserGroupInformation.isSecurityEnabled(UserGroupInformation.java:216)
    at org.apache.hadoop.security.UserGroupInformation.getLoginUser(UserGroupInformation.java:409)
    at org.apache.hadoop.security.UserGroupInformation.getCurrentUser(UserGroupInformation.java:395)
    at org.apache.hadoop.fs.FileSystem$Cache$Key.<init>(FileSystem.java:1436)
    at org.apache.hadoop.fs.FileSystem$Cache.get(FileSystem.java:1337)
    at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:244)
    at org.apache.hadoop.fs.Path.getFileSystem(Path.java:187)
    at org.apache.hadoop.hbase.util.FSUtils.getRootDir(FSUtils.java:364)
    at org.apache.hadoop.hbase.master.MasterFileSystem.<init>(MasterFileSystem.java:81)
    at org.apache.hadoop.hbase.master.HMaster.finishInitialization(HMaster.java:346)
    at org.apache.hadoop.hbase.master.HMaster.run(HMaster.java:282)
    at org.apache.hadoop.hbase.master.HMasterCommandLine$LocalHMaster.run(HMasterCommandLine.java:193)
    at java.lang.Thread.run(Thread.java:680)
Caused by: java.lang.ClassNotFoundException: org.apache.commons.configuration.Configuration
    at java.net.URLClassLoader$1.run(URLClassLoader.java:202)
    at java.security.AccessController.doPrivileged(Native Method)
    at java.net.URLClassLoader.findClass(URLClassLoader.java:190)
    at java.lang.ClassLoader.loadClass(ClassLoader.java:306)
    at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:301)
    at java.lang.ClassLoader.loadClass(ClassLoader.java:247)
    ... 22 more
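The NoClassDefFoundError above means org.apache.commons.configuration.Configuration is missing from HBase's classpath; Hadoop 0.20.205's metrics code depends on commons-configuration. One common fix is copying the commons-configuration jar that ships in Hadoop's lib directory into hbase/lib. A minimal sketch, where the helper name, jar version, and directory layout are assumptions:

```shell
# copy_commons_config: put Hadoop's bundled commons-configuration jar onto
# HBase's classpath by copying it into hbase/lib. Jar version and directory
# layout are assumptions; adjust them to your installation.
copy_commons_config() {
  local hadoop_home=$1 hbase_home=$2
  cp "$hadoop_home"/lib/commons-configuration-*.jar "$hbase_home"/lib/
}

# Example (hypothetical paths):
# copy_commons_config /usr/local/hadoop /usr/local/hbase
```

After copying the jar, restart the HBase master so the new classpath takes effect.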
On Oct 14, 2011, at 3:35 PM, Jignesh Patel wrote:
> Can somebody help me get Hadoop 0.20.205.0 and HBase 0.90.4 working in
> pseudo-distributed mode? This is the third day in a row that I have not been
> able to make it run.
>
> The details are as follows
>
> http://pastebin.com/KrJePt64
>
>
> If this is not going to work, then let me know which version I should use to
> get it running.
>
> On Oct 14, 2011, at 2:46 PM, Jignesh Patel wrote:
>
>>
>> On Oct 14, 2011, at 2:44 PM, Jignesh Patel wrote:
>>
>>> According to start-hbase.sh, if distributed mode is false then I am supposed
>>> to start only the master; it is not required to start ZooKeeper. See the
>>> snippet from the script below:
>>>
>>> if [ "$distMode" == 'false' ]
>>> then
>>>   "$bin"/hbase-daemon.sh start master
>>> else
>>>   "$bin"/hbase-daemons.sh --config "${HBASE_CONF_DIR}" start zookeeper
>>>   "$bin"/hbase-daemon.sh --config "${HBASE_CONF_DIR}" start master
>>>   "$bin"/hbase-daemons.sh --config "${HBASE_CONF_DIR}" \
>>>     --hosts "${HBASE_REGIONSERVERS}" start regionserver
>>>   "$bin"/hbase-daemons.sh --config "${HBASE_CONF_DIR}" \
>>>     --hosts "${HBASE_BACKUP_MASTERS}" start master-backup
>>> fi
>>>
>>> According to the script above, ZooKeeper does not need to be started, since I
>>> am running the server in pseudo-distributed mode, not fully distributed mode.
>>> But then it gives an error that it cannot connect to ZooKeeper.
>>
>> -Jignesh
>>>
>>>
>>>
>>> On Fri, Oct 14, 2011 at 2:31 AM, Ramya Sunil [via Lucene]
>>> <[email protected]> wrote:
>>> Jignesh,
>>>
>>> I have been able to deploy Hbase 0.90.3 and 0.90.4 with hadoop-0.20.205.
>>> Below are the steps I followed:
>>>
>>> 1. Make sure none of the HBase master, regionservers, or ZooKeeper are
>>>    running. As Matt pointed out, turn on append.
>>> 2. hbase-daemon.sh --config $HBASE_CONF_DIR start zookeeper
>>> 3. hbase-daemon.sh --config $HBASE_CONF_DIR start master
>>> 4. hbase-daemon.sh --config $HBASE_CONF_DIR start regionserver
>>> 5. hbase --config $HBASE_CONF_DIR shell
>>>
>>>
>>> Hope it helps.
>>> Ramya
>>>
>>>
>>>
>>> On Thu, Oct 13, 2011 at 4:11 PM, Jignesh Patel <[hidden email]> wrote:
>>>
>>> > Is there a way to resolve this weird problem?
>>> >
>>> > > bin/hbase-start.sh is supposed to start zookeeper but it doesn't start.
>>> > But on the other side if zookeeper up and running then it says
>>> >
>>> > > Couldnt start ZK at requested address of 2181, instead got: 2182.
>>> > Aborting. Why? Because clients (eg shell) wont be able to find this ZK
>>> > quorum
>>> >
>>> >
>>> >
>>> > On Oct 13, 2011, at 5:40 PM, Jignesh Patel wrote:
>>> >
>>> > > Ok now the problem is
>>> > >
>>> > > if I only use bin/hbase-start.sh then it doesn't start zookeeper.
>>> > >
>>> > > But if I use bin/hbase-daemon.sh start zookeeper before starting
>>> > > bin/hbase-start.sh, then it tries to start ZooKeeper at port 2181 and I
>>> > > get the following error.
>>> > >
>>> > > Couldnt start ZK at requested address of 2181, instead got: 2182.
>>> > Aborting. Why? Because clients (eg shell) wont be able to find this ZK
>>> > quorum
>>> > >
>>> > >
>>> > > So I am wondering: if bin/hbase-start.sh tries to start ZooKeeper, then
>>> > > it should start ZooKeeper whenever it is not already running. I only get
>>> > > the error if ZooKeeper is already running.
>>> > >
>>> > >
>>> > > -Jignesh
>>> > >
>>> > >
>>> > > On Oct 13, 2011, at 4:53 PM, Ramya Sunil wrote:
>>> > >
>>> > >> You already have ZooKeeper running on 2181 according to your jps
>>> > >> output. That is why the master is complaining.
>>> > >> Can you please stop ZooKeeper, verify that no daemons are running on
>>> > >> 2181, and restart your master?
>>> > >>
>>> > >> On Thu, Oct 13, 2011 at 12:37 PM, Jignesh Patel <[hidden email]>
>>> > wrote:
>>> > >>
>>> > >>> Ramya,
>>> > >>>
>>> > >>>
>>> > >>> Based on "HBase: The Definitive Guide", it seems ZooKeeper is started
>>> > >>> by HBase, so there is no need to start it separately (maybe this has
>>> > >>> changed in 0.90.4). Anyway, the updated status follows:
>>> > >>>
>>> > >>> Jignesh-MacBookPro:hadoop-hbase hadoop-user$ bin/start-hbase.sh
>>> > >>> starting master, logging to
>>> > >>>
>>> > /users/hadoop-user/hadoop-hbase/logs/hbase-hadoop-user-master-Jignesh-MacBookPro.local.out
>>> >
>>> > >>> Couldnt start ZK at requested address of 2181, instead got: 2182.
>>> > Aborting.
>>> > >>> Why? Because clients (eg shell) wont be able to find this ZK quorum
>>> > >>> Jignesh-MacBookPro:hadoop-hbase hadoop-user$ jps
>>> > >>> 41486 HQuorumPeer
>>> > >>> 38814 SecondaryNameNode
>>> > >>> 41578 Jps
>>> > >>> 38878 JobTracker
>>> > >>> 38726 DataNode
>>> > >>> 38639 NameNode
>>> > >>> 38964 TaskTracker
>>> > >>>
>>> > >>> On Oct 13, 2011, at 3:23 PM, Ramya Sunil wrote:
>>> > >>>
>>> > >>>> Jignesh,
>>> > >>>>
>>> > >>>> I don't see ZooKeeper running on your master. My cluster reads the
>>> > >>>> following:
>>> > >>>>
>>> > >>>> $ jps
>>> > >>>> 15315 Jps
>>> > >>>> 13590 HMaster
>>> > >>>> 15235 HQuorumPeer
>>> > >>>>
>>> > >>>> Can you please shutdown your Hmaster and run the following first:
>>> > >>>> $ hbase-daemon.sh start zookeeper
>>> > >>>>
>>> > >>>> And then start your hbasemaster and regionservers?
>>> > >>>>
>>> > >>>> Thanks
>>> > >>>> Ramya
>>> > >>>>
>>> > >>>> On Thu, Oct 13, 2011 at 12:01 PM, Jignesh Patel <[hidden email]>
>>> > >>> wrote:
>>> > >>>>
>>> > >>>>> OK, --config worked, but it is showing me the same error. How do I
>>> > >>>>> resolve this?
>>> > >>>>>
>>> > >>>>> http://pastebin.com/UyRBA7vX
>>> > >>>>>
>>> > >>>>> On Oct 13, 2011, at 1:34 PM, Ramya Sunil wrote:
>>> > >>>>>
>>> > >>>>>> Hi Jignesh,
>>> > >>>>>>
>>> > >>>>>> "--config" (i.e. with two dashes) is the option to use, not
>>> > >>>>>> "-config".
>>> > >>>>>> Alternatively you can also set HBASE_CONF_DIR.
>>> > >>>>>>
>>> > >>>>>> Below is the exact command line:
>>> > >>>>>>
>>> > >>>>>> $ hbase --config /home/ramya/hbase/conf shell
>>> > >>>>>> hbase(main):001:0> create 'newtable','family'
>>> > >>>>>> 0 row(s) in 0.5140 seconds
>>> > >>>>>>
>>> > >>>>>> hbase(main):002:0> list 'newtable'
>>> > >>>>>> TABLE
>>> > >>>>>> newtable
>>> > >>>>>> 1 row(s) in 0.0120 seconds
>>> > >>>>>>
>>> > >>>>>> OR
>>> > >>>>>>
>>> > >>>>>> $ export HBASE_CONF_DIR=/home/ramya/hbase/conf
>>> > >>>>>> $ hbase shell
>>> > >>>>>>
>>> > >>>>>> hbase(main):001:0> list 'newtable'
>>> > >>>>>> TABLE
>>> > >>>>>>
>>> > >>>>>> newtable
>>> > >>>>>>
>>> > >>>>>> 1 row(s) in 0.3860 seconds
>>> > >>>>>>
>>> > >>>>>>
>>> > >>>>>> Thanks
>>> > >>>>>> Ramya
>>> > >>>>>>
>>> > >>>>>>
>>> > >>>>>> On Thu, Oct 13, 2011 at 8:30 AM, jigneshmpatel <
>>> > >>> [hidden email]
>>> > >>>>>> wrote:
>>> > >>>>>>
>>> > >>>>>>> There is no option called -config; see below:
>>> > >>>>>>>
>>> > >>>>>>> Jignesh-MacBookPro:hadoop-hbase hadoop-user$ bin/hbase -config
>>> > >>> ./config
>>> > >>>>>>> shell
>>> > >>>>>>> Unrecognized option: -config
>>> > >>>>>>> Could not create the Java virtual machine.
>>> > >>>>>>>
>>> > >>>>>>> --
>>> > >>>>>>> View this message in context:
>>> > >>>>>>>
>>> > >>>>>
>>> > >>>
>>> > http://lucene.472066.n3.nabble.com/Hbase-with-Hadoop-tp3413950p3418924.html
>>> > >>>>>>> Sent from the Hadoop lucene-users mailing list archive at
>>> > Nabble.com.
>>> > >>>>>>>
>>> > >>>>>
>>> > >>>>>
>>> > >>>
>>> > >>>
>>> > >
>>> >
>>> >
>>>
>>>
>>>
>>
>