This is an experimental system right now and I will bump up the # of servers in production. But here are the specs:
6 regionservers (64GB RAM, 48GB allocated to the HBase heap, some more
allocated to the Datanode and other processes)
50-55 regions per server
workload: 25K gets/second, 25K puts/second (puts are not consistently
that high, but I am quoting the peak)
handler count: 2.5K per regionserver

The data size is pretty compact (think TSDB style) and it should fit in
memory (in the test environment). Yet I see long pauses when doing GETs.
I suspect those pauses happen when all the regionserver handlers are busy
servicing RPC requests, which would make sense. I could experiment with
scaling out the cluster, but before doing that I want to bump the
regionserver handler count and see how far I can stretch it (see the
hbase-site.xml sketch in the P.S. below). But it seems I can't go beyond
5K right now.

Thanks,
Viral

On Sun, Apr 28, 2013 at 3:19 PM, Ted Yu <[email protected]> wrote:

> bq. the setting is per regionserver (as the name suggests) and not per
> region right ?
>
> That is correct.
>
> Can you give us more information about your cluster size, workload, etc.?
>
> Thanks
>
> On Mon, Apr 29, 2013 at 4:30 AM, Viral Bajaria <[email protected]> wrote:
>
> > Hi,
> >
> > I have been trying to play around with the regionserver handler count.
> > What I noticed was, the cluster comes up fine up to a certain point,
> > ~7500 regionserver handlers. But above that the system refuses to
> > start up: the servers keep spinning, and the ROOT region keeps
> > bouncing around different states without ever stabilizing.
> >
> > So the first question: what is the max that folks on the list have
> > gone to with this setting? If anyone has gone above 10,000, have you
> > done any special tuning?
> >
> > Secondly, the setting is per regionserver (as the name suggests) and
> > not per region, right?
> >
> > Following are my versions:
> > HBase 0.94.5
> > Hadoop 1.0.4
> > Ubuntu 12.04
> >
> > Let me know if you need any more information from my side.
> >
> > Thanks,
> > Viral
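
P.S. For reference, the setting I am tuning lives in hbase-site.xml on
each regionserver and (as far as I know, on 0.94) requires a restart to
take effect. A minimal sketch with my current value of 2.5K, assuming the
stock property name:

  <property>
    <!-- number of RPC handler threads per regionserver (0.94 default: 10) -->
    <name>hbase.regionserver.handler.count</name>
    <value>2500</value>
  </property>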
