[ 
https://issues.apache.org/jira/browse/HBASE-14479?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15360884#comment-15360884
 ] 

Hiroshi Ikeda commented on HBASE-14479:
---------------------------------------

RpcServer.Responder is a sort of safety net used when the native send buffer of a 
socket is full, and it is rarely exercised if clients are well-behaved and wait 
for the response to each request before sending the next. That means YCSB must 
be issuing multiple requests concurrently over a single connection.
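
To illustrate what I mean by a safety net, here is a minimal sketch (the class, 
queue, and method names are made up, not the actual RpcServer code): the response 
is first written inline on the non-blocking channel, and only the leftover bytes, 
if any, are queued for a Responder-style writer.

{code:java}
import java.io.IOException;
import java.nio.ByteBuffer;
import java.nio.channels.SocketChannel;
import java.util.Queue;
import java.util.concurrent.ConcurrentLinkedQueue;

// Sketch only: a handler tries to write the response inline on the non-blocking
// channel and falls back to a Responder-style queue when the kernel send buffer
// is full (partial write).
public class InlineWriteSketch {

    // Hypothetical per-connection queue of responses the Responder must finish.
    private final Queue<ByteBuffer> pendingResponses = new ConcurrentLinkedQueue<>();

    /** Returns true if the whole response was written inline. */
    public boolean tryInlineWrite(SocketChannel channel, ByteBuffer response) throws IOException {
        channel.write(response);              // non-blocking write, may be partial
        if (response.hasRemaining()) {
            // Send buffer is full: hand the remainder to the Responder thread,
            // which would register OP_WRITE on its own selector.
            pendingResponses.add(response);
            return false;
        }
        return true;                          // the common case with well-behaved clients
    }
}
{code}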

I checked the source of RpcServer and found that Reader.doRead(SelectionKey) 
handles only one request per call, regardless of whether the next request is 
already available, unless the requests go through SASL. As a result, the patch 
for this issue unnecessarily re-registers the connection's selection key for 
every request, causing overhead (as shown by 
sun.nio.ch.EPollArrayWrapper::updateRegistrations, though I didn't expect such 
a difference in throughput).
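
For illustration, a rough sketch of the alternative (the names and framing 
helpers below are placeholders, not the actual Reader code): drain every request 
that is already buffered instead of returning after the first one and 
re-registering the key.

{code:java}
import java.io.IOException;
import java.nio.ByteBuffer;
import java.nio.channels.SelectionKey;
import java.nio.channels.SocketChannel;

// Sketch only: reading one request per doRead() forces a selection-key update
// for every request; draining everything already buffered amortizes that cost.
public class DrainingReaderSketch {

    private final ByteBuffer readBuffer = ByteBuffer.allocate(64 * 1024);

    void doRead(SelectionKey key) throws IOException {
        SocketChannel channel = (SocketChannel) key.channel();
        if (channel.read(readBuffer) < 0) {
            key.cancel();                     // connection closed by the client
            return;
        }
        readBuffer.flip();
        // Instead of decoding a single request and returning, keep going while a
        // complete request is already in the buffer, so no extra epoll round trip
        // (and no interestOps churn) is needed for back-to-back requests.
        while (containsCompleteRequest(readBuffer)) {
            dispatch(decodeOneRequest(readBuffer));
        }
        readBuffer.compact();
    }

    // Placeholders for the framing and dispatch logic that the sketch assumes.
    private boolean containsCompleteRequest(ByteBuffer buf) { return false; }
    private Object decodeOneRequest(ByteBuffer buf) { return null; }
    private void dispatch(Object call) { }
}
{code}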

BTW, if we resolve this by reading as many requests from a connection as 
possible, the call queue will easily fill up and it will become difficult to 
handle requests fairly across connections. I think it is better to cap the 
number of requests executing simultaneously for each connection, based on how 
many requests are currently queued (instead of using a fixed bounded queue).
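
As a rough sketch of that idea (the names and the scaling rule are placeholders, 
nothing from the attached patches): a per-connection in-flight counter whose cap 
shrinks as the shared queue fills.

{code:java}
import java.util.concurrent.atomic.AtomicInteger;

// Sketch only: bound how many requests from a single connection may be in
// flight at once, with a cap derived from the current depth of the shared queue.
public class PerConnectionThrottleSketch {

    private final AtomicInteger inFlight = new AtomicInteger();
    private final AtomicInteger queuedTotal;   // shared count of queued requests
    private final int maxQueued;

    public PerConnectionThrottleSketch(AtomicInteger queuedTotal, int maxQueued) {
        this.queuedTotal = queuedTotal;
        this.maxQueued = maxQueued;
    }

    /** Cap per connection: generous when the queue is empty, down to 1 when nearly full. */
    private int currentCap() {
        int free = Math.max(0, maxQueued - queuedTotal.get());
        return Math.max(1, free / 8);          // arbitrary scaling for the sketch
    }

    /** Called by the reader before accepting another request from this connection. */
    public boolean tryAcquire() {
        while (true) {
            int current = inFlight.get();
            if (current >= currentCap()) {
                return false;                  // stop reading from this connection for now
            }
            if (inFlight.compareAndSet(current, current + 1)) {
                return true;
            }
        }
    }

    /** Called when a handler finishes one request from this connection. */
    public void release() {
        inFlight.decrementAndGet();
    }
}
{code}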

> Apply the Leader/Followers pattern to RpcServer's Reader
> --------------------------------------------------------
>
>                 Key: HBASE-14479
>                 URL: https://issues.apache.org/jira/browse/HBASE-14479
>             Project: HBase
>          Issue Type: Improvement
>          Components: IPC/RPC, Performance
>            Reporter: Hiroshi Ikeda
>            Assignee: Hiroshi Ikeda
>            Priority: Minor
>         Attachments: HBASE-14479-V2 (1).patch, HBASE-14479-V2.patch, 
> HBASE-14479-V2.patch, HBASE-14479.patch, flamegraph-19152.svg, 
> flamegraph-32667.svg, gc.png, gets.png, io.png, median.png
>
>
> {{RpcServer}} uses multiple selectors to read data for load distribution, but 
> the distribution is done simply by round-robin. It is uncertain, especially 
> over a long run, whether the load is divided equally and resources are used 
> without waste.
> Moreover, multiple selectors may cause excessive context switches that favor 
> low latency (even though we just add the requests to queues), which can 
> reduce the throughput of the whole server.
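
For reference, a minimal sketch of the Leader/Followers idea named in the summary 
(all names are illustrative; this is not the attached patch): reader threads share 
one selector, only the current leader blocks in select(), and it promotes a 
follower before processing the event it picked up. Note that dropping and 
restoring read interest per event is the same kind of key re-registration 
discussed in the comment above.

{code:java}
import java.io.IOException;
import java.nio.channels.SelectionKey;
import java.nio.channels.Selector;
import java.util.Iterator;
import java.util.concurrent.locks.ReentrantLock;

// Illustrative Leader/Followers reader: all threads run this loop against one
// shared selector; the lock decides who the current leader is.
public class LeaderFollowersSketch implements Runnable {

    private final Selector selector;
    private final ReentrantLock leaderLock = new ReentrantLock();

    public LeaderFollowersSketch(Selector selector) {
        this.selector = selector;
    }

    @Override
    public void run() {
        while (!Thread.currentThread().isInterrupted()) {
            SelectionKey key = null;
            leaderLock.lock();                 // become the leader
            try {
                key = awaitReadableKey();
                if (key != null) {
                    // Drop read interest so the next leader does not see the same
                    // readiness again; this per-event re-registration is the cost
                    // pointed out in the comment above.
                    key.interestOps(key.interestOps() & ~SelectionKey.OP_READ);
                }
            } catch (IOException e) {
                return;                        // selector closed; stop this thread
            } finally {
                leaderLock.unlock();           // promote a follower to leader
            }
            if (key != null) {
                processRead(key);              // former leader handles the request(s)
                key.interestOps(key.interestOps() | SelectionKey.OP_READ);
                selector.wakeup();             // let the current leader pick up the change
            }
        }
    }

    // Blocks in select() until some connection is readable, then returns its key.
    private SelectionKey awaitReadableKey() throws IOException {
        while (selector.select() > 0) {
            Iterator<SelectionKey> it = selector.selectedKeys().iterator();
            if (it.hasNext()) {
                SelectionKey key = it.next();
                it.remove();
                return key;
            }
        }
        return null;                           // woken up without a ready key
    }

    // Placeholder for the actual request decoding and dispatch.
    private void processRead(SelectionKey key) { }
}
{code}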



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
