[
https://issues.apache.org/jira/browse/HBASE-14479?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14977559#comment-14977559
]
Hiroshi Ikeda commented on HBASE-14479:
---------------------------------------
I have an idea: instead of preparing an exclusive thread pool in RpcExecutor, a
simple scheduler could execute tasks on (almost always) the calling thread under
low load, backed by a queue for tasks.
Pseudo code:
{code}
void RpcScheduler.dispatch(callRunner) {
  queue.offer(callRunner);
  if (threadsExecutingTasks < MAX_THREADS_EXECUTING_TASKS) {
    threadsExecutingTasks++;
    while ((task = queue.poll()) != null) {
      execute(task);
    }
    // In most cases under low load, this executes just the one task added above.
    threadsExecutingTasks--;
  }
}
{code}
This is based on the assumption that we can borrow some threads from RpcServer
for a while.
In the actual code, I would use an AtomicLong to manage the number of threads and
tasks.
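For concreteness, a minimal self-contained Java sketch of that dispatch follows.
The class name BorrowingRpcScheduler, the limit constant, and the CallRunner
interface below are illustrative stand-ins (the real class is
o.a.h.hbase.ipc.CallRunner), not code from the attached patches:
{code}
import java.util.Queue;
import java.util.concurrent.ConcurrentLinkedQueue;
import java.util.concurrent.atomic.AtomicInteger;

public class BorrowingRpcScheduler {
  // Illustrative limit on how many RpcServer threads may be borrowed at once.
  private static final int MAX_THREADS_EXECUTING_TASKS = 4;

  private final Queue<CallRunner> queue = new ConcurrentLinkedQueue<>();
  private final AtomicInteger threadsExecutingTasks = new AtomicInteger();

  /** Called on the RpcServer thread; borrows it briefly while below the limit. */
  public void dispatch(CallRunner callRunner) {
    queue.offer(callRunner);
    int current;
    do {
      current = threadsExecutingTasks.get();
      if (current >= MAX_THREADS_EXECUTING_TASKS) {
        return; // enough borrowed threads are already draining the queue
      }
    } while (!threadsExecutingTasks.compareAndSet(current, current + 1));
    try {
      CallRunner task;
      // Under low load this usually runs only the task offered just above.
      while ((task = queue.poll()) != null) {
        task.run();
      }
    } finally {
      threadsExecutingTasks.decrementAndGet();
    }
  }

  /** Placeholder for the real CallRunner, to keep the sketch self-contained. */
  public interface CallRunner {
    void run();
  }
}
{code}
Note the simple counter check above can strand a task that is offered just after a
draining thread sees an empty queue; packing the thread and task counts into a
single AtomicLong, as mentioned above, is one way to close that gap.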
> Apply the Leader/Followers pattern to RpcServer's Reader
> --------------------------------------------------------
>
> Key: HBASE-14479
> URL: https://issues.apache.org/jira/browse/HBASE-14479
> Project: HBase
> Issue Type: Improvement
> Components: IPC/RPC, Performance
> Reporter: Hiroshi Ikeda
> Assignee: Hiroshi Ikeda
> Priority: Minor
> Attachments: HBASE-14479-V2 (1).patch, HBASE-14479-V2.patch,
> HBASE-14479-V2.patch, HBASE-14479.patch, flamegraph-19152.svg,
> flamegraph-32667.svg, gc.png, gets.png, io.png, median.png
>
>
> {{RpcServer}} uses multiple selectors to read data for load distribution, but
> the distribution is done only by round-robin. It is uncertain, especially over
> a long run, whether the load is divided equally and resources are used without
> waste.
> Moreover, the multiple selectors may cause excessive context switches in favor
> of low latency (even though we merely add the requests to queues), which can
> reduce the throughput of the whole server.
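For readers unfamiliar with the pattern named in the title, here is a minimal
illustrative Java sketch of Leader/Followers over a single shared Selector; the
class and method names are hypothetical and do not come from the attached patches:
{code}
import java.io.IOException;
import java.nio.channels.SelectionKey;
import java.nio.channels.Selector;
import java.util.Iterator;

public class LeaderFollowerReaders {
  private final Selector selector;      // one selector shared by all readers
  private final Object leaderLock = new Object();

  public LeaderFollowerReaders(Selector selector) {
    this.selector = selector;
  }

  /** Each reader thread runs this loop; only one at a time acts as the leader. */
  public void readerLoop() throws IOException {
    while (!Thread.currentThread().isInterrupted()) {
      SelectionKey ready;
      synchronized (leaderLock) {                     // become the leader
        while (selector.selectedKeys().isEmpty()) {
          selector.select();                          // wait for a ready channel
        }
        Iterator<SelectionKey> it = selector.selectedKeys().iterator();
        ready = it.next();
        it.remove();                                  // claim exactly one key
      }                                               // hand leadership to a follower
      processRead(ready);                             // read outside the lock
    }
  }

  private void processRead(SelectionKey key) {
    // Read the request from key.channel() and hand it to the scheduler.
  }
}
{code}
Only one thread at a time blocks in select(); after claiming a ready key it
releases leadership and performs the read outside the lock, so no fixed
round-robin assignment of channels to selectors is needed.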
--
This message was sent by Atlassian JIRA
(v6.3.4#6332)