[ https://issues.apache.org/jira/browse/HBASE-24526?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17151574#comment-17151574 ]

Duo Zhang commented on HBASE-24526:
-----------------------------------

In general there are two problems.

1. Updating meta is a blocking operation, which makes the PEWorker wait. So if 
meta is temporarily unavailable, it is possible that all PEWorkers are stuck 
and we cannot make any progress for a long time. This cannot be fully solved 
by checking meta availability before executing the procedure, as we need some 
time to find out that meta is unavailable, and the state of meta is reset by 
TRSP, which is also executed by a PEWorker, which could make things worse. See 
HBASE-19976 and its sub-tasks for how we plan to partially solve this by 
introducing priorities for procedures and also adding more workers if all 
workers are stuck. There is a UT called TestProcedurePriority which confirms 
that the mechanism can finally make progress.
IIRC there is still a problem that we always poll SCP before TRSP, so if a 
bunch of servers crash at the same time, we have to finish all the WAL 
splitting before we actually assign meta...
In general, on the master branch, since we have built-in async connection 
support for the master service, we could introduce a sub-procedure for 
updating meta in TRSP and SCP. The parent procedure would then release its 
PEWorker, and in the update-meta procedure we could make use of the async 
connection to release the PEWorker while the meta update is in flight (see the 
sketch below). On branch-2.x this is not possible, but a possible way is to 
reduce the retry count and timeout so the PEWorker is released sooner and we 
have a chance to schedule the procedure with the highest priority.
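
As an illustration of the hand-off only (this is standalone code, not the 
actual ProcedureV2/TRSP API; all class and method names below are made up for 
the example), the key point is that the worker thread never blocks on the meta 
RPC: the sub-procedure issues an async put, returns immediately, and the 
completion callback re-queues the parent so any free worker can resume it:

{code:java}
// Standalone sketch of "update meta without holding a PEWorker".
// Not HBase code: the run queue and async put are stand-ins.
import java.util.concurrent.*;

public class AsyncMetaUpdateSketch {
  // Stand-in for the procedure scheduler's run queue.
  private final BlockingQueue<Runnable> runQueue = new LinkedBlockingQueue<>();
  // Stand-in for the async master connection used to write hbase:meta.
  private final ScheduledExecutorService rpcPool =
      Executors.newScheduledThreadPool(2);

  /** Pretend async meta put: completes later, possibly exceptionally. */
  private CompletableFuture<Void> asyncUpdateMeta(String region, String state) {
    CompletableFuture<Void> f = new CompletableFuture<>();
    rpcPool.schedule(() -> f.complete(null), 100, TimeUnit.MILLISECONDS);
    return f;
  }

  /** The "update meta sub-procedure": issues the put, releases the worker. */
  void updateMetaProcedure(String region, String state, Runnable resumeParent) {
    asyncUpdateMeta(region, state).whenComplete((v, err) -> {
      if (err != null) {
        // In the real design this would go to a retry / failure state.
        System.err.println("meta update failed for " + region + ": " + err);
      }
      runQueue.add(resumeParent); // wake the parent; any free worker runs it
    });
    // Returning here plays the role of suspending the procedure:
    // the PEWorker is free while the RPC is in flight.
  }

  /** Stand-in PEWorker loop: it never blocks on a meta RPC. */
  void workerLoop() throws InterruptedException {
    while (true) {
      runQueue.take().run();
    }
  }

  public static void main(String[] args) throws Exception {
    AsyncMetaUpdateSketch s = new AsyncMetaUpdateSketch();
    s.runQueue.add(() -> s.updateMetaProcedure("1588230740", "OPENING",
        () -> System.out.println("parent resumed after meta update")));
    Thread worker = new Thread(() -> {
      try {
        s.workerLoop();
      } catch (InterruptedException ie) {
        Thread.currentThread().interrupt();
      }
    });
    worker.setDaemon(true);
    worker.start();
    Thread.sleep(500); // let the demo complete
    s.rpcPool.shutdownNow();
  }
}
{code}

Inside the procedure framework the same effect would come from suspending the 
sub-procedure and waking it from the future's completion callback.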

2. All meta updating operations are done under the region node lock, together 
with the modification to the in-memory state, which implies that the in-memory 
state of a region is always the same as the record in meta. This is a very 
strong assumption which makes the logic much easier to implement, but it also 
makes the optimization in #1 impossible: when a meta update fails, if we still 
want to keep the state in sync, the only way is to abort... Maybe in some 
places we have a retry on meta updating failure, but I'm not sure whether the 
logic is correct. We should revisit the related code to remove this 
assumption, so we can apply the optimization in #1 (see the sketch below).
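
As a rough sketch of what removing that assumption could look like (again 
standalone code, not the real RegionStateNode/RegionStateStore classes; the 
names are illustrative), we could keep the in-memory transition under the 
region lock but let the meta row lag behind and reconcile it later, instead of 
aborting when the put fails:

{code:java}
// Standalone sketch: the in-memory region state is authoritative, the meta row
// may temporarily lag behind, and a "dirty" flag drives later reconciliation.
import java.util.concurrent.atomic.AtomicBoolean;

public class RegionStateSyncSketch {
  enum State { OPEN, OPENING, CLOSING, CLOSED }

  static final class RegionNode {
    final String regionName;
    volatile State memState;                  // authoritative in-memory state
    final AtomicBoolean metaDirty = new AtomicBoolean(false); // meta row lags
    RegionNode(String name, State s) {
      this.regionName = name;
      this.memState = s;
    }
  }

  /** Pretend meta put; returns false to simulate meta being unavailable. */
  boolean putStateToMeta(RegionNode node) {
    return false;
  }

  /** Transition under the region lock, without requiring the put to succeed. */
  void transition(RegionNode node, State next) {
    synchronized (node) {
      node.memState = next;
      // Today a failed put here would force an abort; in the sketch we just
      // remember that meta is behind and move on.
      node.metaDirty.set(!putStateToMeta(node));
    }
  }

  /** Background reconciliation: flush dirty states once meta is writable. */
  void reconcile(RegionNode node) {
    if (node.metaDirty.get()) {
      synchronized (node) {
        if (putStateToMeta(node)) {
          node.metaDirty.set(false);
        }
      }
    }
  }
}
{code}

Whether this is safe depends on every reader of meta tolerating a briefly 
stale row until reconciliation runs, which is exactly the code we would need 
to revisit.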

Thanks.

> Deadlock executing assign meta procedure
> ----------------------------------------
>
>                 Key: HBASE-24526
>                 URL: https://issues.apache.org/jira/browse/HBASE-24526
>             Project: HBase
>          Issue Type: Bug
>          Components: proc-v2, Region Assignment
>    Affects Versions: 2.3.0
>            Reporter: Nick Dimiduk
>            Priority: Critical
>
> I have what appears to be a deadlock while assigning meta. During recovery, 
> the master creates the assign procedure for meta, and immediately marks meta 
> as assigned in ZooKeeper. It then creates the subprocedure to open meta on 
> the target region server. However, the PEWorker pool is full of procedures 
> that are stuck, I think because their calls to update meta are going 
> nowhere. For what it's worth, the balancer is running concurrently, and has 
> calculated a plan size of 41.
> From the master log,
> {noformat}
> 2020-06-06 00:34:07,314 INFO 
> org.apache.hadoop.hbase.master.assignment.TransitRegionStateProcedure: 
> Starting pid=17802, ppid=17801, 
> state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; 
> TransitRegionStateProcedure table=hbase:meta, region=1588230740, ASSIGN; 
> state=OPEN, location=null; forceNewPlan=true, retain=false
> 2020-06-06 00:34:07,465 INFO 
> org.apache.hadoop.hbase.zookeeper.MetaTableLocator: Setting hbase:meta 
> (replicaId=0) location in ZooKeeper as 
> hbasedn139.example.com,16020,1591403576247
> 2020-06-06 00:34:07,466 INFO 
> org.apache.hadoop.hbase.procedure2.ProcedureExecutor: Initialized 
> subprocedures=[{pid=17803, ppid=17802, state=RUNNABLE; 
> org.apache.hadoop.hbase.master.assignment.OpenRegionProcedure}]
> {noformat}
> {{pid=17803}} is not mentioned again. hbasedn139 never receives an 
> {{openRegion}} RPC.
> Meanwhile, additional procedures are scheduled and picked up by workers, each 
> getting "stuck". I see log lines for all 16 PEWorker threads, saying that 
> they are stuck.
> {noformat}
> 2020-06-06 00:34:07,961 INFO 
> org.apache.hadoop.hbase.master.procedure.MasterProcedureScheduler: Took xlock 
> for pid=17804, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; 
> TransitRegionStateProcedure table=IntegrationTestBigLinkedList, 
> region=54f4f6c0e921e6d25e6043cba79c09aa, REOPEN/MOVE
> 2020-06-06 00:34:07,961 INFO 
> org.apache.hadoop.hbase.master.assignment.RegionStateStore: pid=17804 
> updating hbase:meta row=54f4f6c0e921e6d25e6043cba79c09aa, 
> regionState=CLOSING, regionLocation=hbasedn046.example.com,16020,1591402383956
> ...
> 2020-06-06 00:34:22,295 WARN 
> org.apache.hadoop.hbase.procedure2.ProcedureExecutor: Worker stuck 
> PEWorker-16(pid=17804), run time 14.3340 sec
> ...
> 2020-06-06 00:34:27,295 WARN 
> org.apache.hadoop.hbase.procedure2.ProcedureExecutor: Worker stuck 
> PEWorker-16(pid=17804), run time 19.3340 sec
> ...
> {noformat}
> The cluster stays in this state, with PEWorker threads stuck, for upwards of 
> 15 minutes. Eventually the master starts logging
> {noformat}
> 2020-06-06 00:50:18,033 INFO 
> org.apache.hadoop.hbase.client.RpcRetryingCallerImpl: Call exception, 
> tries=30, retries=31, started=970072 ms ago, cancelled=false, msg=Call queue 
> is full on hbasedn139.example.com,16020,1591403576247, too many items queued 
> ?, details=row 
> 'IntegrationTestBigLinkedList,,1591398987965.54f4f6c0e921e6d25e6043cba79c09aa.'
>  on table 'hbase:meta' at region=hbase:meta,,1.
> 1588230740, hostname=hbasedn139.example.com,16020,1591403576247, seqNum=-1, 
> see https://s.apache.org/timeout
> {noformat}
> The master never recovers on its own.
> I'm not sure how common this condition might be. This popped after about 20 
> total hours of running ITBLL with ServerKillingMonkey.
