Xiaolin Ha created HBASE-27683:
----------------------------------

             Summary: Should support single call queue mode for RPC handlers 
while separating by request type
                 Key: HBASE-27683
                 URL: https://issues.apache.org/jira/browse/HBASE-27683
             Project: HBase
          Issue Type: Improvement
          Components: Performance, rpc
    Affects Versions: 2.5.3
            Reporter: Xiaolin Ha
            Assignee: Xiaolin Ha


Currently we not only separate call queues by request type, e.g. read, write, and scan, but also split queues among the handlers via the config `hbase.ipc.server.callqueue.handler.factor`, whose description is as follows:
{code:java}
Factor to determine the number of call queues.
  A value of 0 means a single queue shared between all the handlers.
  A value of 1 means that each handler has its own queue. {code}
But I think what we want is neither a single queue for all requests nor one queue per handler. We also want a mode where each request type has exactly one queue.
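
For context, here is a minimal sketch of how the factor translates into a queue count. It mirrors the rounding behavior of the RPC executor but is illustrative, not copied from HBase; the method name is hypothetical.
{code:java}
// Illustrative sketch: how handler.factor could map to a queue count.
// A factor of 0 yields a single shared queue; a factor of 1 yields one
// queue per handler; values in between yield a proportional count.
static int computeNumCallQueues(int handlerCount, float callQueuesHandlersFactor) {
  return Math.max(1, Math.round(handlerCount * callQueuesHandlersFactor));
}
{code}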

Splitting the queues within a single request type can leave some handlers idle while others are overloaded under the current balanced/random RPC executor framework. In the extreme case, where each handler has its own queue, if a large request is dispatched to one handler, then because the executor dispatches calls without considering the queue size or the state of the handler, the requests arriving afterwards are queued until that handler completes the large, slow request. Meanwhile other handlers may process small requests quickly, but they cannot help out or steal calls from the busy queue; they must sit and wait for jobs to arrive on their own queues. As a result, the queue time of some requests is long even though there are idle handlers.
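
A small self-contained demo of this head-of-line blocking, with illustrative names only (this is not HBase code): two handlers with private queues, where a slow call on one queue blocks the fast call behind it even though the other handler stays idle.
{code:java}
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;

// Illustrative only, not HBase code. Two handlers, each with a private
// queue: the slow call on q0 blocks the fast call queued behind it,
// even though handler-1 sits idle the whole time.
public class PerHandlerQueueDemo {
  public static void main(String[] args) throws InterruptedException {
    BlockingQueue<Runnable> q0 = new LinkedBlockingQueue<>();
    BlockingQueue<Runnable> q1 = new LinkedBlockingQueue<>();
    startHandler("handler-0", q0);
    startHandler("handler-1", q1);

    long start = System.nanoTime();
    // The dispatcher picks a queue without looking at its backlog.
    q0.put(() -> sleep(2000));                // a large, slow request
    q0.put(() -> report("fast call", start)); // stuck behind it (~2000 ms)
    // handler-1 has nothing to do and cannot steal from q0.
    Thread.sleep(2500); // keep the JVM alive until the demo finishes
  }

  static void startHandler(String name, BlockingQueue<Runnable> q) {
    Thread t = new Thread(() -> {
      try {
        while (true) {
          q.take().run();
        }
      } catch (InterruptedException e) {
        Thread.currentThread().interrupt();
      }
    }, name);
    t.setDaemon(true);
    t.start();
  }

  static void sleep(long ms) {
    try { Thread.sleep(ms); } catch (InterruptedException e) { }
  }

  static void report(String what, long start) {
    System.out.printf("%s waited %d ms in queue%n",
        what, (System.nanoTime() - start) / 1_000_000);
  }
}
{code}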

We have also observed cases where the queue time of calls is much larger than the process time, sometimes twice as large or more. Restarting the slow RS makes these problems disappear.

By using a single call queue for each request type, we can make full use of the handler resources.
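
Reusing the helper methods from the sketch above, the proposed mode would look like this: all handlers of one request type poll a single shared queue, so any idle handler picks up the next call, and the fast call no longer waits behind the slow one. Again, the names are illustrative.
{code:java}
// One shared queue per request type: whichever handler is idle takes
// the next call, so the fast call runs immediately on read-handler-1
// instead of waiting ~2000 ms behind the slow call.
BlockingQueue<Runnable> readQueue = new LinkedBlockingQueue<>();
startHandler("read-handler-0", readQueue);
startHandler("read-handler-1", readQueue);

long start = System.nanoTime();
readQueue.put(() -> sleep(2000));                // slow request
readQueue.put(() -> report("fast call", start)); // taken by the idle handler
{code}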


