[
https://issues.apache.org/jira/browse/HBASE-16138?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15406703#comment-15406703
]
Gary Helmling commented on HBASE-16138:
---------------------------------------
Yeah, I don't see a deadlock issue here either, unless I'm missing something.
The calls to the replication table on region open will be triggered from
OpenRegionHandler in the RS executor service, so they won't block handler
threads, even for the duration of a single request timeout.
There may be further cleanup to make the assignment more streamlined, but I
think this kind of convergent behavior is much more resilient than trying to
build further dependency ordering into the system.
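
To make the threading point concrete, here is a minimal sketch of the pattern
(hypothetical class and method names, not the actual HBase code): the handler
thread only enqueues the open-region task on a dedicated executor, so any
blocking on the replication table happens on an executor thread rather than on
a handler thread.

import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

public class RegionOpenDispatchSketch {

  // Stand-in for the RS executor service that runs OpenRegionHandler instances.
  private final ExecutorService openRegionExecutor = Executors.newFixedThreadPool(3);

  // Called from a handler thread; returns as soon as the task is queued.
  public void submitOpenRegion(final String regionName) {
    openRegionExecutor.submit(() -> {
      // Roughly where OpenRegionHandler.process() would run. If this blocks
      // (e.g. waiting on the replication table), only this executor thread
      // waits; the handler thread that submitted the task is already free.
      openRegion(regionName);
    });
  }

  private void openRegion(String regionName) {
    // Placeholder for the real region-open work (WAL setup, replay, online).
    System.out.println("opening " + regionName);
  }

  public static void main(String[] args) {
    RegionOpenDispatchSketch rs = new RegionOpenDispatchSketch();
    rs.submitOpenRegion("some-region");
    rs.openRegionExecutor.shutdown(); // let the queued task finish, then exit
  }
}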
> Cannot open regions after non-graceful shutdown due to deadlock with
> Replication Table
> --------------------------------------------------------------------------------------
>
> Key: HBASE-16138
> URL: https://issues.apache.org/jira/browse/HBASE-16138
> Project: HBase
> Issue Type: Sub-task
> Components: Replication
> Reporter: Joseph
> Assignee: Joseph
> Priority: Critical
> Attachments: HBASE-16138.patch
>
>
> If we shut down an entire HBase cluster and attempt to start it back up, we
> have to run the WAL pre-log roll that occurs before opening a region. This
> pre-log roll must record the new WAL in ReplicationQueues, and that call ends
> up blocking on TableBasedReplicationQueues.getOrBlockOnReplicationTable()
> because the Replication Table is not up yet. But we cannot assign the
> Replication Table because we cannot open any regions. This deadlocks the
> entire cluster whenever we lose Replication Table availability.
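> A minimal sketch of that circular wait (hypothetical, simplified code; this is
> not the actual TableBasedReplicationQueues implementation, only an illustration
> of the blocking shape described above):
>
> import java.util.concurrent.CountDownLatch;
>
> public class ReplicationTableDeadlockSketch {
>
>   // Counted down only once the Replication Table's region is online.
>   private final CountDownLatch replicationTableUp = new CountDownLatch(1);
>
>   // Analogue of TableBasedReplicationQueues.getOrBlockOnReplicationTable():
>   // waits until the Replication Table is available.
>   private void blockOnReplicationTable() throws InterruptedException {
>     replicationTableUp.await();
>   }
>
>   // Analogue of opening any region: the pre-log roll must record the new WAL
>   // in the replication queue, which requires the Replication Table.
>   public void openRegion(String regionName) throws InterruptedException {
>     blockOnReplicationTable();        // waits for the table to come up ...
>     if (regionName.equals("replication-table-region")) {
>       replicationTableUp.countDown(); // ... but this line is never reached
>     }
>   }
> }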
> There are a few options, but none of them seem very good:
> 1. Depend on Zookeeper-based Replication until the Replication Table becomes
> available
> 2. Have a separate WAL for System Tables that does not perform any
> replication (see discussion at HBASE-14623)
> Or just have a separate WAL for non-replicated vs. replicated regions
> 3. Record the WAL log in the ReplicationQueue asynchronously (don't block
> opening a region on this event), which could lead to inconsistent Replication
> state
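> A rough sketch of what option 3 might look like (hypothetical names, not a real
> patch): the pre-log roll hands the queue update to a background thread and
> returns immediately, so region open no longer waits on the Replication Table,
> at the cost of the replication queue temporarily lagging behind the WALs that
> actually exist:
>
> import java.util.concurrent.ExecutorService;
> import java.util.concurrent.Executors;
>
> public class AsyncWalRecordingSketch {
>
>   private final ExecutorService queueRecorder = Executors.newSingleThreadExecutor();
>
>   // Called from preLogRoll; returns immediately instead of blocking on the
>   // Replication Table, so region open can proceed.
>   public void recordLogAsync(final String walName) {
>     queueRecorder.submit(() -> {
>       // The real implementation would have to retry until the Replication
>       // Table is available; until then the new WAL is not visible in the
>       // replication queue, which is the inconsistency mentioned above.
>       recordLogInReplicationQueue(walName);
>     });
>   }
>
>   private void recordLogInReplicationQueue(String walName) {
>     System.out.println("recorded " + walName);
>   }
> }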
> The stacktrace:
> org.apache.hadoop.hbase.replication.regionserver.ReplicationSourceManager.recordLog(ReplicationSourceManager.java:376)
> org.apache.hadoop.hbase.replication.regionserver.ReplicationSourceManager.preLogRoll(ReplicationSourceManager.java:348)
> org.apache.hadoop.hbase.replication.regionserver.Replication.preLogRoll(Replication.java:370)
> org.apache.hadoop.hbase.regionserver.wal.FSHLog.tellListenersAboutPreLogRoll(FSHLog.java:637)
> org.apache.hadoop.hbase.regionserver.wal.FSHLog.rollWriter(FSHLog.java:701)
> org.apache.hadoop.hbase.regionserver.wal.FSHLog.rollWriter(FSHLog.java:600)
> org.apache.hadoop.hbase.regionserver.wal.FSHLog.<init>(FSHLog.java:533)
> org.apache.hadoop.hbase.wal.DefaultWALProvider.getWAL(DefaultWALProvider.java:132)
> org.apache.hadoop.hbase.wal.RegionGroupingProvider.getWAL(RegionGroupingProvider.java:186)
> org.apache.hadoop.hbase.wal.RegionGroupingProvider.getWAL(RegionGroupingProvider.java:197)
> org.apache.hadoop.hbase.wal.WALFactory.getWAL(WALFactory.java:240)
> org.apache.hadoop.hbase.regionserver.HRegionServer.getWAL(HRegionServer.java:1883)
> org.apache.hadoop.hbase.regionserver.handler.OpenRegionHandler.openRegion(OpenRegionHandler.java:363)
> org.apache.hadoop.hbase.regionserver.handler.OpenRegionHandler.process(OpenRegionHandler.java:129)
> org.apache.hadoop.hbase.executor.EventHandler.run(EventHandler.java:129)
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
> java.lang.Thread.run(Thread.java:745)
> Does anyone have any suggestions/ideas/feedback?
> Attached a Review Board request at: https://reviews.apache.org/r/50546/
> It is still pretty rough; I would just like some feedback on it.