[
https://issues.apache.org/jira/browse/HBASE-19359?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16305942#comment-16305942
]
Guanghao Zhang commented on HBASE-19359:
----------------------------------------
bq. So, leave it at 10 retries and change the backoff intervals?
15 is ok, sir :-). Let's keep the current backoff intervals and use 15 as the
default retry number for now, because a real exponential backoff may be too
long for users. Another important point is that the backoff should work
together with the operation timeout, but the operation timeout semantics are
still confusing (see HBASE-17449)... We should have a clear rule about timeout
and retry: an operation will not fail until it reaches either the retry limit
or the operation timeout limit, whichever comes first. If we can guarantee
this, I think it is ok to change the default backoff to a "real exponential
backoff" and add special retry backoffs for special cases. Even if the retry
backoff is long, users can still bound it with the operation timeout.
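A minimal sketch of that rule (a hypothetical helper, not the actual
RpcRetryingCaller code; maxRetries/operationTimeoutMs/basePauseMs stand in for
hbase.client.retries.number, hbase.client.operation.timeout and
hbase.client.pause): the loop gives up at whichever limit is hit first, so even
a long exponential backoff stays bounded by the operation timeout.
{code:java}
// Hypothetical sketch only -- not the real HBase retry loop.
import java.io.IOException;
import java.util.concurrent.Callable;

public final class RetrySketch {
  static <T> T callWithRetries(Callable<T> rpc, int maxRetries,
      long operationTimeoutMs, long basePauseMs) throws Exception {
    long deadline = System.currentTimeMillis() + operationTimeoutMs;
    Exception lastFailure = null;
    for (int tries = 0; tries <= maxRetries; tries++) {
      try {
        return rpc.call();
      } catch (IOException e) {
        lastFailure = e;
        // "Real" exponential backoff: basePause * 2^tries.
        long backoff = basePauseMs * (1L << tries);
        long remaining = deadline - System.currentTimeMillis();
        if (remaining <= 0) {
          break; // operation timeout reached first: stop retrying
        }
        // Cap the sleep at the remaining deadline so a long backoff
        // is still bounded by the operation timeout.
        Thread.sleep(Math.min(backoff, remaining));
      }
    }
    throw new Exception("retries or operation timeout exhausted", lastFailure);
  }
}
{code}
Capping each sleep at the remaining deadline is the part that makes a "real
exponential backoff" safe: the user-facing bound is always the operation
timeout.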
> Revisit the default config of hbase client retries number
> ---------------------------------------------------------
>
> Key: HBASE-19359
> URL: https://issues.apache.org/jira/browse/HBASE-19359
> Project: HBase
> Issue Type: Sub-task
> Reporter: Guanghao Zhang
> Assignee: Guanghao Zhang
> Fix For: 2.0.0-beta-1
>
> Attachments: HBASE-19359.master.001.patch,
> HBASE-19359.master.001.patch, HBASE-19359.master.001.patch
>
>
> This should be a sub-task of HBASE-19148. As the retries number affects too
> many unit tests, I opened this issue to see the Hadoop QA result.
> The default value of hbase.client.retries.number is 35. Plan to reduce this
> to 10.
> On the server side, the default hbase.client.serverside.retries.multiplier
> is 10, so the server-side retries number is 35 * 10 = 350. That is too big!
> Plan to reduce hbase.client.serverside.retries.multiplier to 3.
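> A minimal sketch of the arithmetic, assuming Hadoop's Configuration as the
> lookup mechanism (the string keys are the real config names; the class
> around them is hypothetical):
> {code:java}
> // Sketch of the arithmetic behind the proposal. The string keys are the
> // real config names; the class around them is illustrative only.
> import org.apache.hadoop.conf.Configuration;
>
> public class ServerSideRetriesSketch {
>   public static void main(String[] args) {
>     Configuration conf = new Configuration();
>     // Proposed defaults: 10 client retries, multiplier 3.
>     int clientRetries = conf.getInt("hbase.client.retries.number", 10);
>     int multiplier =
>         conf.getInt("hbase.client.serverside.retries.multiplier", 3);
>     // Server-side retries = client retries * multiplier.
>     // Old defaults: 35 * 10 = 350; proposed: 10 * 3 = 30.
>     System.out.println(clientRetries * multiplier);
>   }
> }
> {code}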
--
This message was sent by Atlassian JIRA
(v6.4.14#64029)