[ https://issues.apache.org/jira/browse/HBASE-19359?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16305911#comment-16305911 ]

stack commented on HBASE-19359:
-------------------------------

I like how you're thinking [~zghaobac]. So, leave it at 10 retries and change 
the backoff intervals?
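
For context, a minimal sketch of how the client-side sleeps add up, assuming 
the usual hbase.client.pause of 100 ms and a backoff multiplier table along 
the lines of HConstants.RETRY_BACKOFF (the table values below are illustrative 
only, not a proposal):

public class RetryBackoffSketch {
  // Illustrative multiplier table; the real one lives in HConstants.RETRY_BACKOFF.
  static final int[] BACKOFF = {1, 2, 3, 5, 10, 20, 40, 100, 100, 100};
  static final long PAUSE_MS = 100L; // assumed hbase.client.pause default

  public static void main(String[] args) {
    long totalMs = 0;
    for (int tries = 0; tries < 10; tries++) {
      int idx = Math.min(tries, BACKOFF.length - 1);
      long sleepMs = PAUSE_MS * BACKOFF[idx];
      totalMs += sleepMs;
      System.out.println("attempt " + tries + ": sleep " + sleepMs + " ms");
    }
    System.out.println("worst-case sleep across 10 retries: " + totalMs + " ms");
  }
}

So with 10 retries the backoff table, not the retry count, is what decides how 
long a client hangs around before giving up.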

On the blocking file limit, the 90s pause is coarse. We should be able to do 
better. Telling the client to back off would be way better.
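
For reference, the 90s figure is the store-file back-pressure wait; a minimal 
sketch of the two settings involved, set programmatically here purely for 
illustration (the blockingStoreFiles value is illustrative, not a proposed 
default):

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;

public class BlockingLimitSketch {
  public static void main(String[] args) {
    Configuration conf = HBaseConfiguration.create();
    // Writes to a region are blocked once a store accumulates this many files
    // (hbase.hstore.blockingStoreFiles); value here is illustrative.
    conf.setInt("hbase.hstore.blockingStoreFiles", 16);
    // The regionserver holds updates up to this long before unblocking anyway;
    // the 90000 ms default is the "90s pause" discussed above.
    conf.setLong("hbase.hstore.blockingWaitTime", 90000L);
    System.out.println("blockingWaitTime = "
        + conf.getLong("hbase.hstore.blockingWaitTime", -1L) + " ms");
  }
}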


> Revisit the default config of hbase client retries number
> ---------------------------------------------------------
>
>                 Key: HBASE-19359
>                 URL: https://issues.apache.org/jira/browse/HBASE-19359
>             Project: HBase
>          Issue Type: Sub-task
>            Reporter: Guanghao Zhang
>            Assignee: Guanghao Zhang
>             Fix For: 2.0.0-beta-1
>
>         Attachments: HBASE-19359.master.001.patch, 
> HBASE-19359.master.001.patch, HBASE-19359.master.001.patch
>
>
> This is a sub-task of HBASE-19148. Because the retries number affects too many 
> unit tests, I opened this issue to see the Hadoop QA result.
> The default value of hbase.client.retries.number is 35. The plan is to reduce 
> this to 10.
> On the server side, the default hbase.client.serverside.retries.multiplier is 
> 10, so the server-side retries number is 35 * 10 = 350. That is too big! The 
> plan is to reduce hbase.client.serverside.retries.multiplier to 3.
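
A minimal sketch of the current versus proposed values from the description and 
the resulting server-side retry count (the helper method here is hypothetical, 
used only to show the arithmetic, not HBase's own code):

public class RetryDefaultsSketch {
  // Hypothetical helper mirroring the arithmetic in the description:
  // server-side retries = client retries * server-side multiplier.
  static int serverSideRetries(int clientRetries, int multiplier) {
    return clientRetries * multiplier;
  }

  public static void main(String[] args) {
    // Current defaults: hbase.client.retries.number = 35,
    // hbase.client.serverside.retries.multiplier = 10.
    System.out.println("current server-side retries = " + serverSideRetries(35, 10));  // 350
    // Proposed defaults: 10 client retries with a multiplier of 3.
    System.out.println("proposed server-side retries = " + serverSideRetries(10, 3));  // 30
  }
}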



