Hi,
I also tried the following parameters:

export HBASE_REGIONSERVER_OPTS="-Xmx2g -Xms2g -Xmn256m -XX:+UseParNewGC -XX:+UseConcMarkSweepGC -XX:CMSInitiatingOccupancyFraction=70 -verbose:gc -XX:+PrintGCDetails -XX:+PrintGCTimeStamps -Xloggc:$HBASE_HOME/logs/gc-$(hostname)-hbase.log"

hbase.regionserver.global.memstore.upperLimit = 0.50
hbase.regionserver.global.memstore.lowerLimit = 0.50
hbase.regionserver.handler.count = 30
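(i.e., in hbase-site.xml these look roughly like the snippet below, same shape as the config quoted further down:)

<property>
  <name>hbase.regionserver.global.memstore.upperLimit</name>
  <value>0.50</value>
</property>
<property>
  <name>hbase.regionserver.global.memstore.lowerLimit</name>
  <value>0.50</value>
</property>
<property>
  <name>hbase.regionserver.handler.count</name>
  <value>30</value>
</property>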
But still not much effect. Any suggestions on how to improve the ingestion speed?

On Fri, Mar 22, 2013 at 9:04 PM, tarang dawer <[email protected]> wrote:

> 3 region servers: 2 region servers having 5 regions each, 1 having 6 + 2 (meta and root).
> 1 CF.
> Set HBASE_HEAPSIZE in hbase-env.sh to 4 GB.
>
> Is the flush size okay, or do I need to reduce/increase it?
>
> I'll look into the flushQ and compactionQ sizes and get back to you.
>
> Do these parameters seem okay to you? If something seems odd / not in order, please do tell.
>
> Thanks
> Tarang Dawer
>
>
> On Fri, Mar 22, 2013 at 8:21 PM, Anoop John <[email protected]> wrote:
>
>> How many regions per RS? And how many CFs in the table?
>> What is the -Xmx for the RS process? You will get 35% of that memory for all the memstores in the RS.
>> hbase.hregion.memstore.flush.size = 1GB!!
>>
>> Can you closely observe the flushQ size and compactionQ size? You may be getting many small-file flushes (due to global heap pressure) and subsequently many minor compactions.
>>
>> -Anoop-
>>
>> On Fri, Mar 22, 2013 at 8:14 PM, tarang dawer <[email protected]> wrote:
>>
>>> Hi
>>> As per my use case, I have to write around 100 GB of data, with an ingestion speed of around 200 mbps. While writing, I am getting a performance hit from compaction, which adds to the delay.
>>> I am using an 8-core machine with 16 GB RAM available and a 2 TB 7200 RPM HDD.
>>> Got some ideas from the archives, tried pre-splitting the regions, and configured HBase with the following parameters (configured in haste, so please guide me if anything's out of order):
>>>
>>> <property>
>>>   <name>hbase.hregion.memstore.block.multiplier</name>
>>>   <value>4</value>
>>> </property>
>>> <property>
>>>   <name>hbase.hregion.memstore.flush.size</name>
>>>   <value>1073741824</value>
>>> </property>
>>> <property>
>>>   <name>hbase.hregion.max.filesize</name>
>>>   <value>1073741824</value>
>>> </property>
>>> <property>
>>>   <name>hbase.hstore.compactionThreshold</name>
>>>   <value>5</value>
>>> </property>
>>> <property>
>>>   <name>hbase.hregion.majorcompaction</name>
>>>   <value>0</value>
>>> </property>
>>> <property>
>>>   <name>hbase.hstore.blockingWaitTime</name>
>>>   <value>30000</value>
>>> </property>
>>> <property>
>>>   <name>hbase.hstore.blockingStoreFiles</name>
>>>   <value>200</value>
>>> </property>
>>> <property>
>>>   <name>hbase.regionserver.lease.period</name>
>>>   <value>3000000</value>
>>> </property>
>>>
>>> But I'm still not able to achieve the optimal rate; I'm getting around 110 mbps.
>>> I need some optimizations, so could you please help out?
>>>
>>> Thanks
>>> Tarang Dawer
>>>
>>>
>>> On Fri, Mar 22, 2013 at 6:05 PM, Jean-Marc Spaggiari <[email protected]> wrote:
>>>
>>>> Hi Tarang,
>>>>
>>>> I recommend you take a look at the list archives first to see all the discussions related to compaction. You will find many interesting hints and tips.
>>>>
>>>> http://search-hadoop.com/?q=compactions&fc_project=HBase&fc_type=mail+_hash_+user
>>>>
>>>> After that, you will need to provide more details regarding how you are using HBase and how the compaction is impacting you.
>>>> JM
>>>>
>>>> 2013/3/22 tarang dawer <[email protected]>:
>>>>
>>>>> Hi
>>>>> I am using HBase 0.94.2 currently. Its write performance is being affected by compaction.
>>>>> Could you please suggest some quick tips on how to deal with it?
>>>>>
>>>>> Thanks
>>>>> Tarang Dawer
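P.S. To make the pre-splitting concrete: below is a minimal sketch (against the 0.94 client API) of creating a pre-split table and pushing puts through the client-side write buffer. The table name, column family, split points, and buffer size are placeholder assumptions, not my actual setup.

import java.io.IOException;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.HColumnDescriptor;
import org.apache.hadoop.hbase.HTableDescriptor;
import org.apache.hadoop.hbase.client.HBaseAdmin;
import org.apache.hadoop.hbase.client.HTable;
import org.apache.hadoop.hbase.client.Put;
import org.apache.hadoop.hbase.util.Bytes;

public class PreSplitIngest {

    public static void main(String[] args) throws IOException {
        Configuration conf = HBaseConfiguration.create();

        // Pre-split the table into 16 regions so writes hit all region servers
        // from the start instead of one region that splits under load.
        HBaseAdmin admin = new HBaseAdmin(conf);
        HTableDescriptor desc = new HTableDescriptor("ingest_test");
        desc.addFamily(new HColumnDescriptor("cf"));
        byte[][] splits = new byte[15][];
        for (int i = 1; i <= 15; i++) {
            splits[i - 1] = Bytes.toBytes(String.format("%02x", i * 16)); // "10", "20", ... "f0"
        }
        admin.createTable(desc, splits);
        admin.close();

        // Batch puts on the client: disable auto-flush and use a larger write
        // buffer so each RPC carries many puts instead of one.
        HTable table = new HTable(conf, "ingest_test");
        table.setAutoFlush(false);
        table.setWriteBufferSize(8 * 1024 * 1024); // 8 MB, placeholder value

        for (long i = 0; i < 1000000; i++) {
            // Two-char hex prefix spreads consecutive rows across the pre-split regions.
            String row = String.format("%02x-%012d", i % 256, i);
            Put put = new Put(Bytes.toBytes(row));
            put.add(Bytes.toBytes("cf"), Bytes.toBytes("q"), Bytes.toBytes("value-" + i));
            table.put(put);
        }
        table.flushCommits(); // flush whatever is still buffered
        table.close();
    }
}

With auto-flush off, nothing reaches the region servers until the buffer fills or flushCommits() is called, so I flush explicitly before closing.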
