Genady,

Answers inline.

J-D

On Wed, Jan 28, 2009 at 10:33 AM, Genady <[email protected]> wrote:

>  Thanks for your quick response Jean-Daniel,
>
>
>
> By a bigger heap size you probably meant the Hadoop heap size. The problem
> is that it brings back the original issue: the Hadoop heap was first set to
> 2GB, and it caused
>
Try 1.5GB
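For the HBase daemons, the heap is set in conf/hbase-env.sh (the value is in MB; 1500 here just matches the 1.5GB suggestion above, adjust as needed):

```shell
# conf/hbase-env.sh -- a sketch; 1500 MB matches the 1.5GB suggested above
export HBASE_HEAPSIZE=1500
```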


> "java.lang.OutOfMemoryError: unable to create new native
> thread<http://www.egilh.com/blog/archive/2006/06/09/2811.aspx>",
> probably as a result of the increased xceiver thread count. So I set the
> Hadoop heap to 1GB and xceivers to 3400, but at some point the region
> servers stop responding again. There are now 481 regions on each region
> server (assuming each server holds different regions).
>
>

You have 481 regions on each region server?


> I'll try to increase the max file size. By the way, will it affect new
> regions only, or existing ones as well?
>
>

Only the new ones, unless you recreate your table.
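The split threshold can be raised cluster-wide in hbase-site.xml (property name from the 0.19-era configs; 1GB below is only an example value):

```xml
<!-- hbase-site.xml: raise the region split threshold so regions grow
     larger before splitting; 1073741824 bytes = 1GB (example value) -->
<property>
  <name>hbase.hregion.max.filesize</name>
  <value>1073741824</value>
</property>
```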


>
>
> By the way, is there a limit to how many regions a server can handle?
> (CPU: 4 x 2GHz, RAM: 8GB)
>

It depends on your schema. With 2 families it's manageable, but the low
node count means your region servers are overloaded. We average more like
80-100 regions per node, but we have 15 nodes.
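If the datanodes have the headroom, the xceiver ceiling itself can also be raised in hadoop-site.xml on each datanode. Note that each xceiver is a native thread, so this trades against the out-of-memory problem you hit earlier, and the property name really is spelled "xcievers":

```xml
<!-- hadoop-site.xml on each datanode; 4096 is an illustrative value -->
<property>
  <name>dfs.datanode.max.xcievers</name>
  <value>4096</value>
</property>
```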


>
>
> Regarding the HBase schema, we have only one table, which has two column
> families.
>
>
>
> Thanks again,
>
>
>
> Gennady
>
>
>
>
>
> -----Original Message-----
> From: [email protected] [mailto:[email protected]] On Behalf Of
> Jean-Daniel Cryans
> Sent: Wednesday, January 28, 2009 4:02 PM
> To: [email protected]
> Subject: Re: Hbase 0.19 failed to start: exceeds the limit of concurrent
> xcievers 3000
>
> Genady,
>
> Some comments.
>
> Try a bigger heap size, something like 2GB.
>
> Set the handler count to 4, that thing eats a lot of memory.
>
> 430 regions for 3 nodes is really a lot and HBase currently opens a lot of
> files. Try increasing the max file size of your tables so that it takes
> longer to split and therefore have fewer regions. Search the list on how
> to do that.
>
> Also, do you happen to have a lot of families in your tables?
>
> Thx,
>
> J-D
>
> On Wed, Jan 28, 2009 at 8:53 AM, Genady <[email protected]> wrote:
>
> > Hi,
> >
> > It seems that HBase 0.19 on Hadoop 0.19 fails to start because it
> > exceeds the limit of concurrent xceivers (seen in the Hadoop datanode
> > logs), which is currently 3000. Setting more than 3000 xceivers causes a
> > JVM out of memory exception. Is there something wrong with the
> > configuration parameters of the cluster (three nodes, 430 regions,
> > Hadoop heap size is the default, 1GB)?
> > Additional parameters in the hbase configuration are:
> >
> > dfs.datanode.handler.count = 6
> > dfs.datanode.socket.write.timeout = 0
> >
> > java.io.IOException: xceiverCount 3001 exceeds the limit of concurrent
> > xcievers 3000
> >        at
> > org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:87)
> >        at java.lang.Thread.run(Thread.java:619)
> >
> > Any help is very appreciated,
> >
> > Genady
>
