Hi Vimal,

It's for testing only, so you can put whatever value you want ;)
It might just take longer before HBase sees that the RS is down. But as I
said, it's for testing, so it's not really critical. You might also want to
play with GC options to make it faster.

JM

2013/8/21 Vimal Jain <[email protected]>

> Can someone please give advice on this?
>
>
> On Tue, Aug 20, 2013 at 12:41 PM, Vimal Jain <[email protected]> wrote:
>
> > Hi,
> > I am running HBase in pseudo-distributed mode on top of HDFS.
> > Recently, I was facing problems related to long GC pauses.
> > When I read the official documentation, it suggested increasing the
> > zookeeper timeout.
> >
> > I am planning to make it 10 minutes. I understand the risk of increasing
> > the timeout: it takes longer for the master to detect an RS failure and
> > reassign its regions.
> > But as I am running pseudo-distributed mode, there is only one RS, and
> > even if it goes down my entire system is down, so increasing the timeout
> > does not seem to be an issue in my case. But still, I would like an
> > expert's advice on this.
> >
> >
> > Here is my hbase-site.xml:
> >
> > <configuration>
> >   <property>
> >     <name>zookeeper.session.timeout</name>
> >     <value>600000</value>
> >   </property>
> >   <property>
> >     <name>hbase.zookeeper.property.tickTime</name>
> >     <value>30000</value>
> >   </property>
> >   <property>
> >     <name>hbase.rootdir</name>
> >     <value>hdfs://192.168.20.30:9000/hbase</value>
> >   </property>
> >   <property>
> >     <name>hbase.cluster.distributed</name>
> >     <value>true</value>
> >   </property>
> >   <property>
> >     <name>hbase.zookeeper.quorum</name>
> >     <value>192.168.20.30</value>
> >   </property>
> >   <property>
> >     <name>dfs.replication</name>
> >     <value>1</value>
> >   </property>
> >   <property>
> >     <name>hbase.zookeeper.property.clientPort</name>
> >     <value>2181</value>
> >   </property>
> >   <property>
> >     <name>hbase.zookeeper.property.dataDir</name>
> >     <value>/home/hadoop/HbaseData/zookeeper</value>
> >   </property>
> > </configuration>
> >
> > --
> > Thanks and Regards,
> > Vimal Jain
> >
>
>
> --
> Thanks and Regards,
> Vimal Jain
>
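A side note on the quoted config (my reading of it, not something stated in
the thread): by default ZooKeeper caps the negotiated session timeout at
20 x tickTime, so the 600000 ms session timeout only takes effect because
hbase.zookeeper.property.tickTime was raised to 30000 ms as well:

  maxSessionTimeout (default) = 20 * tickTime
                              = 20 * 30000 ms
                              = 600000 ms   <- matches zookeeper.session.timeout

With the default tickTime of 2000 ms, the server would negotiate the session
down to about 40000 ms regardless of what the client asks for.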

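As for the GC options JM mentions, they would typically go in
conf/hbase-env.sh. A minimal sketch, assuming the stock HBASE_OPTS variable;
the flags and sizes below are illustrative placeholders, not settings from
the thread:

  # conf/hbase-env.sh -- illustrative GC tuning aimed at shorter pauses
  # (heap size and CMS threshold are placeholders; tune against your own GC logs)
  export HBASE_OPTS="$HBASE_OPTS -XX:+UseConcMarkSweepGC \
    -XX:CMSInitiatingOccupancyFraction=70 \
    -XX:+UseCMSInitiatingOccupancyOnly \
    -Xms2g -Xmx2g \
    -verbose:gc -XX:+PrintGCDetails -XX:+PrintGCDateStamps \
    -Xloggc:/var/log/hbase/gc-hbase.log"

Shorter, more predictable pauses also reduce the need for a 10-minute
ZooKeeper session timeout in the first place.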