[
https://issues.apache.org/jira/browse/HBASE-12790?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14999155#comment-14999155
]
Andrew Purtell commented on HBASE-12790:
----------------------------------------
Just throwing out ideas, considering existing interfaces:
We might also consider whether Phoenix's overloading of scanner.next processing on
the server can help here. Scan parameters can be rewritten on the way in on the
server side (in the preXXX coprocessor hooks) according to Phoenix's view of the
arriving requests.
A static adjustment for predictable performance could be enough.
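As a purely illustrative sketch (the class name and the caps are made up, not taken
from the attached patches), a static adjustment could be a RegionObserver that
clamps the scan quanta in preScannerOpen:
{code:java}
import java.io.IOException;

import org.apache.hadoop.hbase.client.Scan;
import org.apache.hadoop.hbase.coprocessor.BaseRegionObserver;
import org.apache.hadoop.hbase.coprocessor.ObserverContext;
import org.apache.hadoop.hbase.coprocessor.RegionCoprocessorEnvironment;
import org.apache.hadoop.hbase.regionserver.RegionScanner;

/**
 * Hypothetical example: clamp how much work a single scanner.next() call may do
 * by rewriting the Scan before the region scanner is created.
 */
public class ScanQuantumObserver extends BaseRegionObserver {

  // Illustrative static caps chosen for predictable per-next() latency.
  private static final int MAX_CACHING = 100;                   // rows per next()
  private static final long MAX_RESULT_SIZE = 1L * 1024 * 1024; // bytes per next()

  @Override
  public RegionScanner preScannerOpen(ObserverContext<RegionCoprocessorEnvironment> ctx,
      Scan scan, RegionScanner s) throws IOException {
    // Rewrite the arriving Scan: clamp caching and max result size to the caps.
    if (scan.getCaching() <= 0 || scan.getCaching() > MAX_CACHING) {
      scan.setCaching(MAX_CACHING);
    }
    if (scan.getMaxResultSize() <= 0 || scan.getMaxResultSize() > MAX_RESULT_SIZE) {
      scan.setMaxResultSize(MAX_RESULT_SIZE);
    }
    return s;
  }
}
{code}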
However, there are interesting opportunities for making dynamic changes here:
As workload increases, perhaps measured by arrival rate or by an estimate of query
performance characteristics (such as estimated cardinality), the amount of work
(measured in time) performed by each scanner.next iteration can be made smaller,
providing lower latency and better responsiveness when work is interleaved, at the
expense of throughput. As workload decreases, the quanta can be increased,
optimizing for better throughput.
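A rough sketch of the dynamic variant (the one-second window, the threshold, and
the class name below are assumptions for illustration, not anything in HBase or
Phoenix) could shrink the quanta when the scanner-open arrival rate rises:
{code:java}
import java.io.IOException;
import java.util.concurrent.atomic.AtomicLong;

import org.apache.hadoop.hbase.client.Scan;
import org.apache.hadoop.hbase.coprocessor.BaseRegionObserver;
import org.apache.hadoop.hbase.coprocessor.ObserverContext;
import org.apache.hadoop.hbase.coprocessor.RegionCoprocessorEnvironment;
import org.apache.hadoop.hbase.regionserver.RegionScanner;

/**
 * Hypothetical example: pick the per-next() quanta from the recent arrival rate
 * of scanner opens. Busy => small quanta (responsiveness), idle => large quanta
 * (throughput).
 */
public class AdaptiveScanQuantumObserver extends BaseRegionObserver {

  private static final long WINDOW_MS = 1000;      // measurement window
  private static final long BUSY_THRESHOLD = 50;   // scanner opens per window

  private final AtomicLong windowStart = new AtomicLong(System.currentTimeMillis());
  private final AtomicLong arrivals = new AtomicLong();

  @Override
  public RegionScanner preScannerOpen(ObserverContext<RegionCoprocessorEnvironment> ctx,
      Scan scan, RegionScanner s) throws IOException {
    boolean busy = recordArrival() > BUSY_THRESHOLD;
    // Small quanta under load, larger quanta when the server is quiet.
    scan.setCaching(busy ? 10 : 500);                              // rows per next()
    scan.setMaxResultSize(busy ? 256L * 1024 : 4L * 1024 * 1024);  // bytes per next()
    return s;
  }

  /** Counts scanner opens in the current window; slightly racy, fine for a sketch. */
  private long recordArrival() {
    long now = System.currentTimeMillis();
    long start = windowStart.get();
    if (now - start > WINDOW_MS && windowStart.compareAndSet(start, now)) {
      arrivals.set(0);
    }
    return arrivals.incrementAndGet();
  }
}
{code}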
In any case, it's really up to the coprocessor application what it wants to do with
respect to rewriting scan parameters on the server. (And up to the client /
query planner how it wants to set up scan parameters in the first place.)
> Support fairness across parallelized scans
> ------------------------------------------
>
> Key: HBASE-12790
> URL: https://issues.apache.org/jira/browse/HBASE-12790
> Project: HBase
> Issue Type: New Feature
> Reporter: James Taylor
> Assignee: ramkrishna.s.vasudevan
> Labels: Phoenix
> Attachments: AbstractRoundRobinQueue.java, HBASE-12790.patch,
> HBASE-12790_1.patch, HBASE-12790_5.patch, HBASE-12790_callwrapper.patch,
> HBASE-12790_trunk_1.patch, PHOENIX_4.5.3-HBase-0.98-2317-SNAPSHOT.zip
>
>
> Some HBase clients parallelize the execution of a scan to reduce the latency of
> getting back results. This can lead to starvation on a loaded cluster with
> interleaved scans, since the RPC queue is ordered and processed on a FIFO basis.
> For example, suppose two clients, A and B, submit largish scans at the same
> time. Say each scan is broken down by the client into 100 scans over equal-depth
> chunks along the row key, and the 100 scans of client A are queued first,
> followed immediately by the 100 scans of client B. In this case, client B is
> starved of any results until the scans for client A complete.
> One solution is to use the attached AbstractRoundRobinQueue instead of the
> standard FIFO queue. The queue implementation could be made configurable (maybe
> it already is) via a new config parameter. Using this queue would require the
> client to attach the same identifier to all 100 parallel scans that represent a
> single logical scan from the client's point of view. With this information, the
> round robin queue would pick tasks off the queue in a round robin fashion
> (instead of a strictly FIFO manner) to prevent starvation across interleaved
> parallelized scans. (A simplified sketch of the grouping idea follows below.)
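To make the grouping idea above concrete, here is a much-simplified, hypothetical
sketch of a call queue that buckets entries by a per-logical-scan group id and
drains the groups in rotation. It is not the attached AbstractRoundRobinQueue and
ignores bounding, priorities, and BlockingQueue integration:
{code:java}
import java.util.ArrayDeque;
import java.util.Iterator;
import java.util.LinkedHashMap;
import java.util.Map;
import java.util.Queue;

/** Hypothetical example: one sub-queue per scan group, drained round robin. */
public class RoundRobinCallQueue<E> {

  private final Map<String, Queue<E>> groups = new LinkedHashMap<String, Queue<E>>();
  private Iterator<Map.Entry<String, Queue<E>>> cursor;

  /** Enqueue a call under the group id shared by all chunks of one logical scan. */
  public synchronized void add(String groupId, E call) {
    Queue<E> q = groups.get(groupId);
    if (q == null) {
      q = new ArrayDeque<E>();
      groups.put(groupId, q);
      cursor = null; // group set changed; restart the rotation
    }
    q.add(call);
  }

  /** Take the next call, rotating across groups; null if everything is empty. */
  public synchronized E poll() {
    for (int i = 0; i < groups.size(); i++) {
      if (cursor == null || !cursor.hasNext()) {
        cursor = groups.entrySet().iterator();
      }
      E call = cursor.next().getValue().poll();
      if (call != null) {
        return call; // empty groups are skipped; a real impl would prune them
      }
    }
    return null;
  }
}
{code}
With such a queue, the 100 scans from client A and the 100 from client B would be
interleaved at the head of the queue rather than processed strictly in arrival
order.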