+1

J-D

On Sun, Jan 17, 2010 at 12:12 PM, Andrew Purtell <apurt...@apache.org> wrote:
>> We could add a metric that did an iteration of handlers in HBaseServer
>> emitting how many were in progress.  There is no provision for doing this
>> currently.  We would have to add accessors, etc.
>
> If already going into the code and changing things...
>
> Having a bounded thread pool is important, so availability can degrade
> gracefully (more or less), as opposed to the whole regionserver becoming
> livelocked. But we don't need the pool to be preallocated as it is
> currently. How about changing the RPC server thread pool so that
> the user can specify a minimum and maximum number of handler threads? The
> pool would start with the minimum, allocate more up to the max to handle
> additional concurrency, then terminate unused threads after some time
> back down to the minimum. Then we can do things like set a maximum of 100
> handlers or such without taking on the overhead of 100 threads until it
> is needed.
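
A minimal sketch of that min/max behaviour, using a stock java.util.concurrent pool rather than HBaseServer's actual handler machinery (the class and parameter names below are made up purely for illustration):

{noformat}
// Sketch only, not HBaseServer code: illustrates the proposed min/max
// handler pool with a stock JDK ThreadPoolExecutor.
import java.util.concurrent.SynchronousQueue;
import java.util.concurrent.ThreadPoolExecutor;
import java.util.concurrent.TimeUnit;

public class ElasticHandlerPool {
  public static ThreadPoolExecutor create(int minHandlers, int maxHandlers) {
    // Starts with minHandlers threads. Because SynchronousQueue has no
    // capacity, each additional concurrent call forces a new thread up to
    // maxHandlers; threads idle for 60s beyond the core are reclaimed,
    // shrinking the pool back toward the minimum.
    return new ThreadPoolExecutor(
        minHandlers, maxHandlers,
        60L, TimeUnit.SECONDS,
        new SynchronousQueue<Runnable>());
    // Calls arriving while all maxHandlers threads are busy are rejected
    // (default AbortPolicy), which is where the bounded, graceful
    // degradation comes from.
  }
}
{noformat}
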
>
>   - Andy
>
>
>
> ----- Original Message ----
>> From: stack <st...@duboce.net>
>> To: hbase-dev@hadoop.apache.org
>> Sent: Sun, January 17, 2010 11:56:45 AM
>> Subject: Re: [jira] Resolved: (HBASE-2133) Increase default number of client 
>>  handlers
>>
>> We could add a metric that did an iteration of handlers in HBaseServer
>> emitting how many were in progress.  There is no provision for doing this
>> currently.  We would have to add accessors, etc.  It's a good idea.  As a
>> metric, we might miss a burst of requests filling all slots but sustained
>> high request numbers would show.
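
A minimal sketch of what such a metric could look like; none of these accessors exist in HBaseServer today (as noted above), and the names are invented for illustration:

{noformat}
// Sketch only: Handler.isBusy() and the surrounding names are hypothetical,
// just to show the shape of a "handlers in progress" gauge.
public class HandlerMetricsSketch {

  /** Minimal stand-in for a handler thread that flags itself around each call. */
  static class Handler extends Thread {
    private volatile boolean busy;
    void setBusy(boolean b) { this.busy = b; }  // set before, cleared after a call
    boolean isBusy() { return busy; }
  }

  /** Polled periodically by the metrics thread and emitted as a gauge. */
  static int countInProgress(Handler[] handlers) {
    int inProgress = 0;
    for (Handler h : handlers) {
      if (h != null && h.isBusy()) {
        inProgress++;
      }
    }
    return inProgress;
  }
}
{noformat}
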
>>
>> St.Ack
>>
>> On Sun, Jan 17, 2010 at 10:17 AM, Lars George wrote:
>>
>> > Hi Andrew,
>> >
>> > I have it at 30. What I would like to know is how to detect this sort
>> > of shortage. I also have the feeling we need these kinds of indicators
>> > exposed as metrics on the various servers so that the max and current
>> > values can be graphed. What do you think?
>> >
>> > Thanks,
>> > Lars
>> >
>> > On Sat, Jan 16, 2010 at 2:02 AM, Andrew Purtell (JIRA) wrote:
>> > >
>> > >     [ https://issues.apache.org/jira/browse/HBASE-2133?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]
>> > >
>> > > Andrew Purtell resolved HBASE-2133.
>> > > -----------------------------------
>> > >
>> > >      Resolution: Fixed
>> > >    Hadoop Flags: [Reviewed]
>> > >
>> > > Committed following change to trunk and 0.20 branch:
>> > > \\
>> > > {noformat}
>> > > --- conf/hbase-default.xml      (revision 899849)
>> > > +++ conf/hbase-default.xml      (working copy)
>> > > @@ -153,10 +153,10 @@
>> > >
>> > >   <property>
>> > >     <name>hbase.regionserver.handler.count</name>
>> > > -    <value>10</value>
>> > > +    <value>25</value>
>> > >     <description>Count of RPC Server instances spun up on RegionServers
>> > >     Same property is used by the HMaster for count of master handlers.
>> > > -    Default is 10.
>> > > +    Default is 25.
>> > >     </description>
>> > >   </property>
>> > >
>> > > {noformat}
>> > >
>> > > We might want to increase this again depending on user feedback. I know I
>> > needed 100 to avoid trouble with high read/write load once above ~200
>> > regions/server.
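
For anyone who needs the 100 Andrew mentions without patching hbase-default.xml, the usual route is an override in hbase-site.xml, which takes precedence over the shipped defaults. A minimal example:

{noformat}
<!-- hbase-site.xml: override the shipped handler-count default -->
<property>
  <name>hbase.regionserver.handler.count</name>
  <value>100</value>
</property>
{noformat}
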
>> > >
>> > >> Increase default number of client handlers
>> > >> ------------------------------------------
>> > >>
>> > >>                 Key: HBASE-2133
>> > >>                 URL: https://issues.apache.org/jira/browse/HBASE-2133
>> > >>             Project: Hadoop HBase
>> > >>          Issue Type: Improvement
>> > >>            Reporter: Andrew Purtell
>> > >>            Assignee: Andrew Purtell
>> > >>             Fix For: 0.20.3, 0.21.0
>> > >>
>> > >>
>> > >> Any reason not to just go ahead and change hbase-default.xml to include:
>> > >> {noformat}
>> > >>   <property>
>> > >>     <name>hbase.regionserver.handler.count</name>
>> > >>     <value>100</value>
>> > >>   </property>
>> > >>   <property>
>> > >>     <name>hbase.zookeeper.property.maxClientCnxns</name>
>> > >>     <value>100</value>
>> > >>   </property>
>> > >> {noformat}
>> > >> ?
>> > >> The current default for both, 10, is anemic.
>> > >
>> > > --
>> > > This message is automatically generated by JIRA.
>> > > -
>> > > You can reply to this email to add a comment to the issue online.
>> > >
>> > >
>> >
