Lars, 

Sigh. 

Yes, configuring your timeouts correctly is important.  
Time is very important in distributed systems. 

Yet, some applications require a faster timeout than others. 
So, you tune some of the timers for a fast fail, and you end up causing 
unintended problems for the others. 

The simplest solution is to use threads in your client app. (Of course this 
assumes that you’re capable of writing clean multi-threaded code, and I don’t 
want to assume anything.) 
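A minimal sketch of that timer-thread idea, using only java.util.concurrent. The `fetchRow` callable here is a stand-in for whatever blocking HBase call you actually make (e.g. a table get); the 500 ms deadline is illustrative, not a recommendation:

```java
import java.util.concurrent.*;

public class FastFailDemo {
    public static void main(String[] args) throws Exception {
        ExecutorService pool = Executors.newSingleThreadExecutor();

        // Stand-in for the real HBase call, which would otherwise block
        // until the client-side retry/timeout machinery gives up.
        Callable<String> fetchRow = () -> {
            Thread.sleep(10_000);   // simulate a hung region server
            return "row-data";
        };

        Future<String> future = pool.submit(fetchRow);
        try {
            // Our own deadline, independent of hbase.rpc.timeout etc.
            String result = future.get(500, TimeUnit.MILLISECONDS);
            System.out.println("got: " + result);
        } catch (TimeoutException e) {
            future.cancel(true);    // interrupts the worker thread
            System.out.println("fast fail after 500 ms");
        } finally {
            pool.shutdownNow();
        }
    }
}
```

Note the side effect Lars warns about below: the cancelled worker may still hold a connection or leave an RPC in flight on the server side.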

Remember that HBase is a shared resource. So you need to consider the needs of 
the whole cluster at the same time you consider the needs of one application. 

Of course there can be unintended consequences if an application suddenly 
starts to drop connections before a result or timeout occurs too.  ;-) 
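For reference, the knobs discussed downthread live in the client's hbase-site.xml (or can be set programmatically on the client Configuration). A hedged sketch; the values are illustrative only, and the right numbers depend on your cluster and SLAs:

```xml
<configuration>
  <!-- Fewer retries = faster give-up, but less tolerance for transient blips -->
  <property>
    <name>hbase.client.retries.number</name>
    <value>3</value>
  </property>
  <!-- Base pause (ms) between retries; scaled up by the client's backoff table -->
  <property>
    <name>hbase.client.pause</name>
    <value>200</value>
  </property>
  <!-- Upper bound (ms) on a single RPC -->
  <property>
    <name>hbase.rpc.timeout</name>
    <value>10000</value>
  </property>
  <property>
    <name>zookeeper.recovery.retry</name>
    <value>1</value>
  </property>
</configuration>
```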


> On Jun 16, 2015, at 12:13 AM, lars hofhansl <[email protected]> wrote:
> 
> Please always tell us which version of HBase you are using. We have fixed a 
> lot of issues in this area over time. Here's an _old_ blog post I wrote about 
> this: http://hadoop-hbase.blogspot.com/2012/09/hbase-client-timeouts.html
> 
> Using yet more threads to monitor timeouts of another thread is a bad idea, 
> especially when the timeout is configurable in the first place.
> 
> -- Lars
>      From: mukund murrali <[email protected]>
> To: [email protected] 
> Sent: Sunday, June 14, 2015 10:22 PM
> Subject: Re: How to make the client fast fail
> 
> It would be great if there were a single timeout configuration on the
> client end, with all other parameters fine-tuned based on that one
> parameter. We have modified ours on a trial basis to suit our needs.
> We are also not sure what side effects configuring those parameters would cause.
> 
> 
> 
> On Mon, Jun 15, 2015 at 10:38 AM, <[email protected]> wrote:
> 
>> We are also interested in a solution for this. With
>> hbase.client.retries.number = 7 and client.pause=400ms, it came down to
>> ~9mins (from 20 mins). Now we are thinking the 9mins is also a big number.
>> 
>> Thanks,
>> Hari
>> 
>> -----Original Message-----
>> From: PRANEESH KUMAR [mailto:[email protected]]
>> Sent: Monday, June 15, 2015 10:33 AM
>> To: [email protected]
>> Subject: Re: How to make the client fast fail
>> 
>> Hi Michael,
>> 
>> We could have a monitoring thread interrupt the HBase client thread
>> after a timeout, but instead of doing that I want the timeout or some
>> exception to be thrown by the HBase client itself.
>> 
>> On Thu, Jun 11, 2015 at 5:16 AM, Michael Segel
>> wrote:
>> 
>>> threads?
>>> 
>>> So that regardless of your Hadoop settings, if you want something
>>> faster, you can use one thread for a timer while the request runs in
>>> another. If you hit your timeout before you get a response, you can
>>> stop your thread.
>>> (YMMV depending on side effects... )
>>> 
>>>> On Jun 10, 2015, at 12:55 AM, PRANEESH KUMAR
>>>> 
>>> wrote:
>>>> 
>>>> Hi,
>>>> 
>>>> I have got the Connection object with default configuration; if
>>>> ZooKeeper, the HMaster, or a region server is down, the client didn't
>>>> fast fail and it took almost 20 mins to throw an error.
>>>> 
>>>> What is the best configuration to make the client fast fail?
>>>> 
>>>> Also, what is the significance of changing the following parameters?
>>>> 
>>>> hbase.client.retries.number
>>>> zookeeper.recovery.retry
>>>> zookeeper.session.timeout
>>>> zookeeper.recovery.retry.intervalmill
>>>> hbase.rpc.timeout
>>>> 
>>>> Regards,
>>>> Praneesh
>>> 
>>> 
>> 
> 
> 
