Sub-1 GB heaps are not useful for anything but cursory functional testing with a 
few rows. They do not give the GC enough leeway to deal with the per-RPC 
garbage that HBase produces.
But that behavior does not translate to what you will see with more data and larger heaps.
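
For a real RegionServer you want several GB of heap and explicit GC settings in 
hbase-env.sh. As a rough sketch (the exact heap size and GC flags depend on your 
hardware and workload, so treat these values as placeholders):

# hbase-env.sh -- illustrative values only, not a recommendation
export HBASE_REGIONSERVER_OPTS="$HBASE_REGIONSERVER_OPTS \
  -Xms8g -Xmx8g \
  -XX:+UseConcMarkSweepGC \
  -XX:+UseParNewGC \
  -XX:CMSInitiatingOccupancyFraction=70"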

If I can plug my own blog post here...


Here's a note I wrote about GC tuning for HBase: 
http://hadoop-hbase.blogspot.com/2014/03/hbase-gc-tuning-observations.html
And here's one on region server sizing: 
http://hadoop-hbase.blogspot.com/2013/01/hbase-region-server-memory-sizing.html
 
-- Lars


________________________________
From: Mark Tse <[email protected]>
To: "[email protected]" <[email protected]> 
Sent: Friday, March 6, 2015 12:20 PM
Subject: RegionServer - Insufficient Memory and Cascading Errors


Hi everyone,

When I do a scan on a table with about 700 rows (about 50 columns each), the 
RegionServers go offline one at a time until all of them are offline. This is 
probably because there is not enough memory available for the RegionServer 
processes (we are working with a sub-1 GB max heap size on our test clusters at 
the moment).
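
Would bounding how much data each scanner RPC pulls back help here? I'm thinking 
of something along these lines (just a sketch against the HBase 1.0 client API; 
the table name and numbers are placeholders):

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;
import org.apache.hadoop.hbase.client.Result;
import org.apache.hadoop.hbase.client.ResultScanner;
import org.apache.hadoop.hbase.client.Scan;
import org.apache.hadoop.hbase.client.Table;

public class BoundedScan {
    public static void main(String[] args) throws Exception {
        Configuration conf = HBaseConfiguration.create();
        try (Connection conn = ConnectionFactory.createConnection(conf);
             Table table = conn.getTable(TableName.valueOf("test_table"))) {
            Scan scan = new Scan();
            scan.setCaching(50);          // rows fetched per RPC instead of the default
            scan.setBatch(10);            // split wide rows (~50 columns) across Results
            scan.setMaxResultSize(2L * 1024 * 1024); // cap bytes returned per RPC
            try (ResultScanner scanner = table.getScanner(scan)) {
                for (Result row : scanner) {
                    // process each row
                }
            }
        }
    }
}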

Increasing the max heap size for the RegionServers alleviates this problem. 
However, my concern is that this kind of cascading failure could occur in 
production with large datasets even with a larger heap size.

What steps can I take to prevent this kind of cascading failure? Is there a way 
to configure RegionServers to return an error instead of just dying (and causing 
the HBase Master to reassign their regions to the next available RegionServer)?

Thanks,
Mark
