stack commented on HBASE-19976:

bq. This is a very typical dead lock problem in computer science.

Smile. We see it in many forms. The usual response is a special channel to handle 
the 'exception'. Then the number of exceptional behaviors builds up, and we up 
the number of 'meta' handlers to avoid deadlock in the meta handlers, or we add 
a meta-meta handler.

I was wondering if you had a thread dump that showed all handlers occupied. All 
worker threads blocked while holding procedures, so that the meta procedure was 
unable to run, would be an ugly situation. Procedures should yield instead of 
blocking.
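To make the yield-vs-block distinction concrete, here is a minimal, hypothetical sketch (not actual HBase code; all names are invented): a fixed worker pool where "assign"-style tasks that cannot make progress give their slot back to the pool instead of blocking in it, so the "recover meta"-style task can still be scheduled. If the assign tasks blocked in place instead, a pool this small would deadlock exactly as described.

```java
import java.util.concurrent.*;
import java.util.concurrent.atomic.AtomicBoolean;

// Hypothetical illustration of worker-pool exhaustion and yielding.
// With only WORKERS slots, tasks that BLOCK waiting for meta would occupy
// every slot and the task that brings meta online could never run.
// Here the waiting tasks YIELD (return and requeue), freeing a slot.
public class YieldDemo {
    static final int WORKERS = 2;

    public static void main(String[] args) throws Exception {
        ExecutorService pool = Executors.newFixedThreadPool(WORKERS);
        AtomicBoolean metaOnline = new AtomicBoolean(false);
        BlockingQueue<Runnable> requeue = new LinkedBlockingQueue<>();

        // "AssignProcedure"-like task: cannot proceed until meta is online.
        Runnable assign = new Runnable() {
            public void run() {
                if (!metaOnline.get()) {
                    requeue.add(this);   // yield: hand the worker slot back
                    return;
                }
                // ...the actual assignment work would happen here...
            }
        };
        pool.submit(assign);
        pool.submit(assign);

        // "RecoverMetaProcedure"-like task: brings meta online. It can run
        // because the assign tasks yielded rather than occupying all slots.
        pool.submit(() -> metaOnline.set(true));

        // Trivial dispatcher: re-run any yielded tasks once meta is up.
        Thread.sleep(200);
        Runnable r;
        while ((r = requeue.poll()) != null) {
            pool.submit(r);
        }
        pool.shutdown();
        pool.awaitTermination(5, TimeUnit.SECONDS);
        System.out.println("metaOnline=" + metaOnline.get());
    }
}
```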

We have dedicated queues -- queues for server tasks, queues for table tasks -- 
and then, within these, notions of priority such that high-priority tasks are 
scheduled more frequently than low-priority ones, and server tasks before table 
tasks. As long as Procedures yield, it should work out fine? You think we need 
to add a new priority dimension to the mix [~Apache9]? The RecoverMetaProcedure 
is made up of multiple steps (log splitting, assign) and subprocedures. Would 
all of them run in a single high-priority thread? Thanks.
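The "priority dimension" idea can be sketched with a priority-ordered dispatch queue. This is a hypothetical illustration only (the priority values and task names are invented, not HBase's scheduler): meta work drains ahead of server work, which drains ahead of ordinary table work.

```java
import java.util.concurrent.PriorityBlockingQueue;

// Hypothetical sketch of priority-ordered dispatch (names/values invented):
// the scheduler polls the highest-priority task first, so meta and server
// work is dispatched ahead of ordinary table work.
public class PriorityDemo {
    // Smaller number = higher priority (illustrative ordering).
    static final int META = 0, SERVER = 1, TABLE = 2;

    static class Task implements Comparable<Task> {
        final int priority;
        final String name;
        Task(int priority, String name) {
            this.priority = priority;
            this.name = name;
        }
        public int compareTo(Task o) {
            return Integer.compare(priority, o.priority);
        }
    }

    public static void main(String[] args) {
        PriorityBlockingQueue<Task> queue = new PriorityBlockingQueue<>();
        // Enqueue in "wrong" order; the queue reorders by priority.
        queue.add(new Task(TABLE, "assign-user-region"));
        queue.add(new Task(SERVER, "server-crash"));
        queue.add(new Task(META, "recover-meta"));

        StringBuilder order = new StringBuilder();
        while (!queue.isEmpty()) {
            order.append(queue.poll().name).append(' ');
        }
        System.out.println(order.toString().trim());
    }
}
```

Whether this helps here still depends on the point above: even a dedicated high-priority lane deadlocks if its occupants block instead of yielding.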

> Dead lock if the worker threads in procedure executor are exhausted
> -------------------------------------------------------------------
>                 Key: HBASE-19976
>                 URL: https://issues.apache.org/jira/browse/HBASE-19976
>             Project: HBase
>          Issue Type: Bug
>            Reporter: Duo Zhang
>            Assignee: stack
>            Priority: Critical
> See the comments in HBASE-19554. If all the worker threads are stuck in 
> AssignProcedure because the meta region is offline, then the RecoverMetaProcedure 
> cannot be executed, causing a deadlock.

This message was sent by Atlassian JIRA
