Bing,

Based on my experience with this configuration, I can list some points, one of 
which may be your solution.

- First and foremost, don't store your service metadata in the system tmp 
directory, because it may get cleaned up on every start and you lose all your 
JobTracker and DataNode information. That is as good as formatting your 
NameNode.
- If you're using CDH, make sure you set up permissions correctly for the root, 
DFS data, and MapReduce directories (refer to the CDH documentation).
- I didn't see the JobTracker in your service list. It should be up and running. 
Check the JobTracker log for any permission issue when starting the JobTracker 
and TaskTracker.
- Before trying anything on your HBase setup, make sure all your Hadoop services 
are up and running. You can check that by running a sample program and verifying 
that the JobTracker and TaskTracker respond and can create intermediate files in 
your mapred.system and mapred.local directories.
- Once you have all Hadoop services up, don't set or change any permissions.
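
As a rough sketch of the first point, an hdfs-site.xml that keeps the HDFS 
metadata out of /tmp could look like the following (the /home/hadoop/... paths 
are only example locations, not from your setup; pick any durable directory 
owned by the Hadoop user):

```xml
<!-- hdfs-site.xml: move NameNode and DataNode storage out of /tmp,
     which the OS may clean up on restart (example paths only) -->
<configuration>
  <property>
    <name>dfs.name.dir</name>
    <value>/home/hadoop/dfs/name</value>
  </property>
  <property>
    <name>dfs.data.dir</name>
    <value>/home/hadoop/dfs/data</value>
  </property>
</configuration>
```

Note that after pointing these at new, empty directories you would have to 
format the NameNode once, since existing metadata in /tmp is not migrated 
automatically.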

As far as the HBase configuration is concerned, there are two paths for setup: 
either you configure ZooKeeper within hbase-site.xml, or you configure it 
separately via zoo.cfg. If you are going with the hbase-site.xml setting for 
ZooKeeper, then confirm the following settings:
- hbase.zookeeper.property.clientPort
- hbase.zookeeper.quorum
- hbase.zookeeper.property.dataDir
Once you have the right settings for these and have set up the root directory 
for HBase, not much more exercise is required. (Make sure the ZooKeeper service 
is up before you start HBase.)
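
For the hbase-site.xml route, those three settings, together with the HBase 
root directory, could look roughly like this (the host, port, and paths are 
placeholder values for a pseudo-distributed setup; match them to your own 
Hadoop configuration):

```xml
<!-- hbase-site.xml: HBase-managed ZooKeeper (example values only) -->
<configuration>
  <property>
    <name>hbase.rootdir</name>
    <value>hdfs://localhost:9000/hbase</value>
  </property>
  <property>
    <name>hbase.zookeeper.quorum</name>
    <value>localhost</value>
  </property>
  <property>
    <name>hbase.zookeeper.property.clientPort</name>
    <value>2181</value>
  </property>
  <property>
    <name>hbase.zookeeper.property.dataDir</name>
    <value>/home/hadoop/zookeeper</value>
  </property>
</configuration>
```

With this route HBase starts and stops the ZooKeeper process itself; with a 
separate zoo.cfg you have to start ZooKeeper yourself before starting HBase.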

I think if you follow the rules above you should be fine. There is no issue 
caused by a long shutdown or frequent machine restarts.

 Champ, moreover you need a good amount of patience to understand the 
problem :) I do understand how frustrating it is when you set up everything and 
the next day you find that things are completely down.

Sent from my BlackBerry, pls excuse typo

-----Original Message-----
From: Bing Li <[email protected]>
Date: Wed, 28 Mar 2012 14:32:12 
To: <[email protected]>; <[email protected]>
Reply-To: [email protected]
Subject: Re: Starting Abnormally After Shutting Down For Some Time

Jean-Daniel,

I changed dfs.data.dir and dfs.name.dir to new paths in the hdfs-site.xml.

I really cannot figure out why HBase/Hadoop get a problem after a
couple of days of being shut down. If I use them frequently, no such master
problem happens.

Each time, I have to reinstall not only HBase/Hadoop but also Ubuntu because of
the problem. It has wasted a lot of my time.

Thanks so much!

Bing



On Wed, Mar 28, 2012 at 4:46 AM, Jean-Daniel Cryans <[email protected]> wrote:

> Hi Bing,
>
> Two questions:
>
> - Can you look at the master log and see what's preventing the master
> from starting?
>
> - Did you change dfs.data.dir and dfs.name.dir in hdfs-site.xml? By
> default it writes to /tmp which can get cleaned up.
>
> J-D
>
> On Tue, Mar 27, 2012 at 12:52 PM, Bing Li <[email protected]> wrote:
> > Dear all,
> >
> > I got a weird problem when programming on the pseudo-distributed mode of
> > HBase/Hadoop.
> >
> > The HBase/Hadoop were installed correctly. It also ran well with my Java
> > code.
> >
> > However, after shutting down the server for some time, for example four
> > or five days, I noticed that HBase/Hadoop got a problem. I got an ERROR
> > when typing "status" in the HBase shell.
> >
> >    ERROR: org.apache.hadoop.hbase.MasterNotRunningException: Retried 7
> > times
> >
> > Such a problem has happened three times in the last three weeks.
> >
> > The HBase/Hadoop are installed on Ubuntu 10.
> >
> > Have you encountered such a problem? How to solve it?
> >
> > Thanks so much!
> >
> > Best regards,
> > Bing
>
