On Wed, Jun 13, 2012 at 8:40 AM, Amit Sela <[email protected]> wrote:
> HBase configurations are:
> hbase.regionserver.handler.count 18
> hbase.regionserver.global.memstore.upperLimit 0.5
> hbase.regionserver.global.memstore.lowerLimit 0.45
> hbase.server.thread.wakefrequency 500
> hbase.hregion.memstore.flush.size 268435456 (256MB)
> hbase.hregion.memstore.block.multiplier 5
> hbase.hstore.blockingStoreFiles 12
>
> As I understand things, when the heap usage of a Region Server reaches
> hbase.regionserver.global.memstore.upperLimit (5GB in this case), all
> updates are blocked and all MemStores are flushed until lowerLimit is
> reached (4.5GB in this case).
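As a quick sanity check of the numbers quoted above, here is a minimal sketch of how the upperLimit/lowerLimit fractions translate into the 5GB and 4.5GB thresholds, assuming the 10G region server heap mentioned later in the thread:

```python
# Global memstore thresholds as fractions of the region server heap.
# The heap size (10 GB) and the limit fractions come from this thread;
# the arithmetic is purely illustrative.
heap_bytes = 10 * 1024**3    # 10 GB region server heap
upper_limit = 0.5            # hbase.regionserver.global.memstore.upperLimit
lower_limit = 0.45           # hbase.regionserver.global.memstore.lowerLimit

# Writes block once total memstore usage crosses the upper threshold,
# and flushing continues until usage drops below the lower threshold.
block_threshold_gb = heap_bytes * upper_limit / 1024**3
flush_target_gb = heap_bytes * lower_limit / 1024**3

print(block_threshold_gb)  # 5.0
print(flush_target_gb)     # 4.5
```

Note these fractions bound only the memstores, not total RS heap usage, which is why the observed heap can sit well above 5GB.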
Yes. Look in the logs for the blocking message.

> During massive writes to HBase I see some of the Region Servers
> constantly (not just spikes) over 7GB, spiking to 9GB from time to time.
>
> Why is that? Is there anything wrong with the configurations I used?
> Is there a better way to control the Region Server memory usage?

Why are you worried about it? You have already allocated the 10G to the
RS. Like a gas, the JVM will tend to grow to occupy the allocated space.
Also remember that CMS runs sloppy and can be slow cleaning up trash.

At a high level, the heap is divided between the memstore, the block
cache, and miscellaneous overhead (handler threads, flushers and
compactors, etc.). If you need more detail, dump the heap and open it in
a profiler. A profiler that can sort by deep sizes will help you zero in
on the big objects and allow you to walk up their allocation tree.
There are only a few roots in HBase (see above for a basic list).

St.Ack
