Hi all,

I was running a test on our local HBase cluster (1 master node, 4 region
servers) and ran into some OutOfMemory errors. One of the region servers
went down first, then the master node followed (ouch!) while I was
inserting the data for the test.

I was still using the default heap size and would like some
recommendations on what to raise it to. My region servers each have 4 GB
of RAM and the master node has 8 GB. It may help if I describe the tests
I was running, so here goes:

The tests ramp up the number of rows to measure query latency for my
particular usage pattern. Each level of testing uses a different number
of rows (1K, 10K and 100K). The exception occurred during the 10K-row
data population (about 3300 rows in).

The data is a single table with one column family and 10K columns per
row; each column holds approx. 500-1000 bytes of data.
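In case it's relevant, the population code is roughly the sketch below
(table, family and qualifier names are placeholders, not the real ones,
and the 750-byte value just stands in for the 500-1000 byte range):

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.hbase.HBaseConfiguration;
    import org.apache.hadoop.hbase.client.HTable;
    import org.apache.hadoop.hbase.client.Put;
    import org.apache.hadoop.hbase.util.Bytes;

    public class PopulateTest {
      public static void main(String[] args) throws Exception {
        Configuration conf = HBaseConfiguration.create();
        HTable table = new HTable(conf, "testtable");  // placeholder table name
        byte[] family = Bytes.toBytes("cf");           // single column family
        byte[] value = new byte[750];                  // ~500-1000 bytes per cell
        for (int row = 0; row < 10000; row++) {        // 10K-row level of the test
          Put put = new Put(Bytes.toBytes(String.format("row-%08d", row)));
          for (int col = 0; col < 10000; col++) {      // 10K column qualifiers per row
            put.add(family, Bytes.toBytes("col-" + col), value);
          }
          table.put(put);                              // one Put per row
        }
        table.close();
      }
    }

At 10K columns of 500-1000 bytes, each row works out to roughly 5-10 MB,
sent as a single Put.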

I should note that the first level of testing with 1K rows was returning
average query response times of approx. 240 ms.

Could someone please advise how large I should set the HBase heap (and
whether you think I should adjust the Hadoop heap as well)?
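For reference, I'm assuming the place to change these is conf/hbase-env.sh
(and conf/hadoop-env.sh for the Hadoop daemons), with something like the
following (the numbers are just examples, not values I've settled on):

    # conf/hbase-env.sh -- heap for the HBase daemons, in MB
    export HBASE_HEAPSIZE=3000

    # conf/hadoop-env.sh -- heap for the Hadoop daemons, in MB
    export HADOOP_HEAPSIZE=1000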

Thanks,
Daniel
