Dear all,
I found that some configuration and data files are saved under /tmp on my
system, so when that information is lost, HBase cannot start normally.
However, I have already tried to change the HDFS directory to another
location. Why are there still files under /tmp?
To change the HDFS directory, I updated hdfs-site.xml as follows. What
else should I do to move everything out of /tmp?
<?xml version="1.0"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
<configuration>
  <property>
    <name>dfs.replication</name>
    <value>1</value>
  </property>
  <property>
    <name>hadoop.tmp.dir</name>
    <value>/home/libing/GreatFreeLabs/Hadoop/FS</value>
  </property>
  <property>
    <name>dfs.name.dir</name>
    <value>${hadoop.tmp.dir}/dfs/name/</value>
  </property>
  <property>
    <name>dfs.data.dir</name>
    <value>${hadoop.tmp.dir}/dfs/data/</value>
  </property>
</configuration>
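I also wonder whether hadoop.tmp.dir should instead be set in
core-site.xml, so that all of the daemons, not only HDFS, pick it up. My
guess (reusing my path from above) is something like:

<?xml version="1.0"?>
<configuration>
  <property>
    <name>hadoop.tmp.dir</name>
    <value>/home/libing/GreatFreeLabs/Hadoop/FS</value>
  </property>
</configuration>

And since I read that HBase itself defaults to /tmp/hbase-${user.name},
perhaps hbase.tmp.dir in hbase-site.xml also needs to be set. The path
below is only a guess for illustration:

  <property>
    <name>hbase.tmp.dir</name>
    <value>/home/libing/GreatFreeLabs/HBase/tmp</value>
  </property>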
Thanks so much!
Best,
Bing
On Wed, Mar 28, 2012 at 4:24 PM, Bing Li <[email protected]> wrote:
> Dear Manish,
>
> Thank you so much for your replies!
>
> The system tmp directory is changed to another location in my hdfs-site.xml.
>
> After running $HADOOP_HOME/bin/start-all.sh, all of the services are
> listed, including the job tracker and task tracker.
>
> 10211 SecondaryNameNode
> 10634 Jps
> 9992 DataNode
> 10508 TaskTracker
> 10312 JobTracker
> 9797 NameNode
>
> In the job tracker's log, one exception was found.
>
> org.apache.hadoop.ipc.RemoteException:
> org.apache.hadoop.hdfs.server.namenode.SafeModeException: Cannot delete
> /home/libing/GreatFreeLabs/Hadoop/FS/mapred/system. Name node is in safe
> mode.
>
> I don't see a ~/mapred directory on my system. How should I configure it?
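>
> If it helps, I believe the NameNode's safe-mode status can be checked
> (and left manually) with the standard dfsadmin commands; this is just my
> understanding of the stock Hadoop CLI, so please correct me if I am
> wrong:
>
> $HADOOP_HOME/bin/hadoop dfsadmin -safemode get
> $HADOOP_HOME/bin/hadoop dfsadmin -safemode leave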
>
> The properties you listed are not set in my system. Are they required?
> Since they have default values (
> http://hbase.apache.org/docs/r0.20.6/hbase-conf.html), do I need to
> update them?
>
> - hbase.zookeeper.property.clientPort.
> - hbase.zookeeper.quorum.
> - hbase.zookeeper.property.dataDir
>
> I have now reinstalled the system, and at least the pseudo-distributed
> mode runs well. I also tried shutting down the Ubuntu machine and
> starting it again, and the system worked fine. But I worry that the
> master-related problem will happen again if the machine stays shut down
> for a longer time. I really don't understand the reason.
>
> Thanks so much!
>
> Best,
> Bing
>
> On Wed, Mar 28, 2012 at 3:11 PM, Manish Bhoge
> <[email protected]> wrote:
>
>> Bing,
>>
>> Based on my experience with the configuration, I can list some points,
>> one of which may be your solution.
>>
>> - First and foremost, don't store your service metadata in the system
>> tmp directory, because it may get cleaned up on every restart and you
>> lose all your job tracker and datanode information. That is as good as
>> formatting your namenode.
>> - If you're using CDH, make sure you set permissions correctly for the
>> root, DFS data, and mapred directories (refer to the CDH documentation).
>> - I didn't see the job tracker in your service list. It should be up
>> and running. Check the job tracker log for any permission issues when
>> starting the job tracker and task tracker.
>> - Before trying anything with your HBase setup, make sure all your
>> Hadoop services are up and running. You can check that by running a
>> sample program and verifying that the job tracker and task tracker
>> respond and create intermediate files in your mapred.system and
>> mapred.local directories.
>> - Once you have all the Hadoop services up, don't set or change any
>> permissions.
>>
>> As far as HBase configuration is concerned, there are two paths for
>> setup: either you configure ZooKeeper within hbase-site.xml, or you
>> configure it separately via zoo.cfg. If you are going with the
>> hbase-site.xml settings for ZooKeeper, then confirm the following
>> settings:
>> - hbase.zookeeper.property.clientPort.
>> - hbase.zookeeper.quorum.
>> - hbase.zookeeper.property.dataDir
>> Once you have the right settings for these and have set up the root
>> directory for HBase, not much more work is required. (Make sure the
>> ZooKeeper service is up before you start HBase.)
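>>
>> For example, in hbase-site.xml these settings would look roughly like
>> the following (the port is the usual ZooKeeper default; the quorum host
>> and data directory are just sample values for a pseudo-distributed
>> setup, so adjust them to your machine):
>>
>> <property>
>>   <name>hbase.zookeeper.property.clientPort</name>
>>   <value>2181</value>
>> </property>
>> <property>
>>   <name>hbase.zookeeper.quorum</name>
>>   <value>localhost</value>
>> </property>
>> <property>
>>   <name>hbase.zookeeper.property.dataDir</name>
>>   <value>/var/zookeeper</value>
>> </property>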
>>
>> I think if you follow the rules above you should be fine. There is no
>> issue caused by a long shutdown or frequent machine restarts.
>>
>> Champ, you also need a good amount of patience to understand the
>> problem :) I do understand how frustrating it is when you set
>> everything up and the next day find that things are completely down.
>>
>> Sent from my BlackBerry, pls excuse typo
>>
>> -----Original Message-----
>> From: Bing Li <[email protected]>
>> Date: Wed, 28 Mar 2012 14:32:12
>> To: <[email protected]>; <[email protected]>
>> Reply-To: [email protected]
>> Subject: Re: Starting Abnormally After Shutting Down For Some Time
>>
>> Jean-Daniel,
>>
>> I changed dfs.data.dir and dfs.name.dir to new paths in the hdfs-site.xml.
>>
>> I really cannot figure out why HBase/Hadoop has a problem after being
>> shut down for a couple of days. If I use it frequently, no such master
>> problem happens.
>>
>> Each time, I have had to reinstall not only HBase/Hadoop but also
>> Ubuntu because of this problem. It has wasted a lot of my time.
>>
>> Thanks so much!
>>
>> Bing
>>
>>
>>
>> On Wed, Mar 28, 2012 at 4:46 AM, Jean-Daniel Cryans
>> <[email protected]> wrote:
>>
>> > Hi Bing,
>> >
>> > Two questions:
>> >
>> > - Can you look at the master log and see what's preventing the master
>> > from starting?
>> >
>> > - Did you change dfs.data.dir and dfs.name.dir in hdfs-site.xml? By
>> > default it writes to /tmp which can get cleaned up.
>> >
>> > J-D
>> >
>> > On Tue, Mar 27, 2012 at 12:52 PM, Bing Li <[email protected]> wrote:
>> > > Dear all,
>> > >
>> > > I got a weird problem when programming in the pseudo-distributed
>> > > mode of HBase/Hadoop.
>> > >
>> > > HBase/Hadoop was installed correctly, and it ran well with my Java
>> > > code.
>> > >
>> > > However, after the server has been shut down for some time, for
>> > > example four or five days, HBase/Hadoop has a problem. I get an
>> > > ERROR when typing "status" in the HBase shell.
>> > >
>> > > ERROR: org.apache.hadoop.hbase.MasterNotRunningException: Retried 7
>> > > times
>> > >
>> > > This problem has happened three times in the past three weeks.
>> > >
>> > > HBase/Hadoop is installed on Ubuntu 10.
>> > >
>> > > Have you encountered such a problem? How can I solve it?
>> > >
>> > > Thanks so much!
>> > >
>> > > Best regards,
>> > > Bing
>> >
>>
>>
>