[
https://issues.apache.org/jira/browse/HBASE-8729?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13680741#comment-13680741
]
Jeffrey Zhong commented on HBASE-8729:
--------------------------------------
[[email protected]] Thanks for the good comments!
I'll address your first two comments in the next patch (Ted already addressed the
second one in the v2 patch). The interesting point is your last comment:
{quote}
Rather than a log replay handler, should we instead have M_SERVER_SHUTDOWN be
its own type... and then make N executor slots for server shutdown handling
rather than for log replay? Would then make the exit of server shutdown
handler nicer in that when we leave it, we have processed the server rather
than as we have in this patch where we go off to another executor for
completion?
{quote}
If we don't introduce the new log replay handler, setting N is tricky: its value
has to be big enough that we don't end up with the issue described in this JIRA.
The other alternative (not clean and error-prone) is to use one pool while
limiting logReplay to at most MaxThreads - 3 slots so that it can't block all
threads in the pool. What do you think? Thanks.
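To make the trade-off concrete, here is a minimal sketch of the separate-executor
approach. The second ExecutorType value and its config key are hypothetical
placeholders, not names from the patch; the first call mirrors the existing
MASTER_SERVER_OPERATIONS registration quoted in the description below:
{code}
// Rough sketch only (second executor type and config key are hypothetical):
// register a dedicated pool for log replay handlers so they cannot exhaust
// the SSH (serverops) pool.
this.executorService.startExecutorService(ExecutorType.MASTER_SERVER_OPERATIONS,
    conf.getInt("hbase.master.executor.serverops.threads", 3));
this.executorService.startExecutorService(ExecutorType.M_LOG_REPLAY_OPS,
    conf.getInt("hbase.master.executor.logreplayops.threads", 10));
{code}
With a dedicated pool, the SSH pool size no longer has to account for blocked
log replay work, which is exactly the tricky part of picking N in the
single-pool alternative.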
> distributedLogReplay may hang during chained region server failure
> ------------------------------------------------------------------
>
> Key: HBASE-8729
> URL: https://issues.apache.org/jira/browse/HBASE-8729
> Project: HBase
> Issue Type: Bug
> Components: MTTR
> Reporter: Jeffrey Zhong
> Assignee: Jeffrey Zhong
> Fix For: 0.98.0, 0.95.2
>
> Attachments: 8729-v2.patch, hbase-8729.patch
>
>
> In a test, half of the cluster (in terms of region servers) was down, and some
> log replay incurred chained RS failures (the receiving RS of a log replay
> failed again).
> By default, we only allow 3 concurrent SSH handlers, controlled by
> {code}
> this.executorService.startExecutorService(ExecutorType.MASTER_SERVER_OPERATIONS,
>     conf.getInt("hbase.master.executor.serverops.threads", 3));
> {code}
> If all 3 SSH handlers are doing logReplay (a blocking call) and one of the
> receiving RSs fails again, then logReplay will hang: regions of the newly
> failed RS can't be re-assigned to another live RS (no SSH handler can be
> scheduled due to the max-threads setting), so the existing log replay keeps
> routing replay traffic to the dead RS.
> The fix is to submit logReplay work to a separate type of executor queue so
> that it does not block SSH region assignment, allowing logReplay to route
> traffic to a live RS after retries and move forward.
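For illustration, a rough sketch of what the hand-off described in that fix
could look like from inside the shutdown handler. The LogReplayHandler class,
its constructor arguments, and the submission path shown here are assumptions
for illustration, not the actual patch:
{code}
// Sketch (hypothetical names): instead of blocking a MASTER_SERVER_OPERATIONS
// slot while replay runs, hand the replay work to its own executor and return,
// so region assignment for newly failed servers is never starved.
public void process() throws IOException {
  // ... re-assign regions of the dead server first ...
  this.services.getExecutorService().submit(
      new LogReplayHandler(this.server, this.services, this.deadServers,
          this.serverName));
  // returning here frees this SSH slot for the next dead server
}
{code}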