[ https://issues.apache.org/jira/browse/HBASE-12790?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14997058#comment-14997058 ]

Andrew Purtell commented on HBASE-12790:
----------------------------------------

On complexity.

We've added a lot of knobs and extension surfaces over the years rather than 
make architectural decisions that would tame complexity at the expense of 
addressing some use cases.

Perhaps one of the worst offenses in that regard is coprocessors. I'm saying 
that even as the person who designed them. (smile) Now of course as a means for 
mixin platform extensions they've been really successful, and have enabled even 
something like Phoenix, which is a wild success. At the same time, when 
thinking about sources of complexity, the coprocessor API is right up there 
because by its nature it will leak internal implementation detail all over the 
place. We hope coprocessor applications will treat internal data types as 
opaque but can't enforce that. The potential for abuse is acute. I will refrain 
from more than the briefest mention of local indexing.

Moreover, having internal extension points invites apps like Phoenix, which of 
course want to make good use of other HBase internals since they are 
available, leading to additional sources of abstraction leakage. On some level 
this is expected and ok. A risk we always have to face, though, is that once 
we have external users of an interface we are locked into supporting its 
semantics as-is, or at least into providing an upgrade path, leading to a 
backwards-compatible code path for every iteration on semantics, even for the 
stuff that leaked and shouldn't have. A good example of this latter phenomenon 
IMHO is pluggable RPC scheduling as it is today.

I'm not fond of the idea of applications plugging in RPC schedulers, as they 
are currently designed. This part of the code was meant to be private, but was 
promoted to LimitedPrivate once Phoenix extended it for indexing. We can debate 
whether this was the right choice. I think it was a reasonable decision at the 
time and won't relitigate it, mainly because I had a big hand in it (smile). However 
someone with a critical perspective could call it an expedient tactical 
decision leaving behind an architectural smell, and they would have a point. 
RPC schedulers must, most unfortunately, specify some hard-coded details about 
executor types and queue types. This will be a problem because third party 
scheduler implementations will not have the same velocity as HBase core as 
executor types and queue types change, or if the whole area of scheduling is 
refactored. This design problem wasn't considered back when we didn't expect 
third parties to plug in schedulers. Now we'll have to live with it somehow.
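
To make the coupling concrete, here is a rough sketch of the shape of that 
pluggable scheduler contract. The method names approximate the 0.98/1.x-era 
org.apache.hadoop.hbase.ipc.RpcScheduler surface and the nested types are 
placeholders, so read it as an illustration of the problem, not a verbatim 
copy of the API:

// Illustrative only: approximates the shape of the LimitedPrivate scheduler
// contract. Placeholder types stand in for the real HBase classes.
public abstract class RpcSchedulerSketch {

  /** Placeholder for org.apache.hadoop.hbase.ipc.CallRunner. */
  public interface CallRunner {}

  /** Placeholder for the scheduler's init context. */
  public interface Context {}

  public abstract void init(Context context);
  public abstract void start();
  public abstract void stop();

  // Every queued call flows through here, so a third party implementation
  // has to decide which internal executor/queue each call belongs to.
  public abstract void dispatch(CallRunner task) throws Exception;

  // One getter per hard-coded queue type. When core adds, removes, or renames
  // an executor or queue type, every external scheduler has to follow along,
  // and we have to keep these methods working "forever".
  public abstract int getGeneralQueueLength();
  public abstract int getPriorityQueueLength();
  public abstract int getReplicationQueueLength();
  public abstract int getActiveRpcHandlerCount();
}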

In that spirit let us turn and consider the current patch here and its 
approach. We are doubling down on leaking internal RPC scheduling 
implementation minutiae to third parties. Tagging RPC requests with a "group 
ID". What is a group ID? Not discussed or documented. How is it used? Not 
discussed or documented, but we can look at the code. When we dig in, only 
scans are tagged. WTF? What about the other RPC types? What is the objective? A 
clean design rationalized across all HBase operation types? No, it's not that. 
If we accept this patch into our RPC we must support it "forever". Not everyone 
thinks that is a good idea. One thing we can all agree on about this patch: if 
accepted as-is, it will be another expedient tactical decision that leaves 
behind another architectural smell.

We may simply need to reset this whole conversation and start over with a 
design discussion. What is the fundamental need? How can we address it in a way 
this developer community as a whole feels comfortable supporting going forward? 
Reviewing this JIRA from top to bottom, it looks to me like we had a problem 
specification, followed immediately by a tactical patch. We skipped over design 
discussion and therefore have reached an impasse.

> Support fairness across parallelized scans
> ------------------------------------------
>
>                 Key: HBASE-12790
>                 URL: https://issues.apache.org/jira/browse/HBASE-12790
>             Project: HBase
>          Issue Type: New Feature
>            Reporter: James Taylor
>            Assignee: ramkrishna.s.vasudevan
>              Labels: Phoenix
>         Attachments: AbstractRoundRobinQueue.java, HBASE-12790.patch, 
> HBASE-12790_1.patch, HBASE-12790_5.patch, HBASE-12790_callwrapper.patch, 
> HBASE-12790_trunk_1.patch, PHOENIX_4.5.3-HBase-0.98-2317-SNAPSHOT.zip
>
>
> Some HBase clients parallelize the execution of a scan to reduce latency in 
> getting back results. This can lead to starvation with a loaded cluster and 
> interleaved scans, since the RPC queue will be ordered and processed on a 
> FIFO basis. For example, say there are two clients, A and B, that submit 
> largish scans at the same time. Each scan is broken down into 100 scans by 
> the client (broken into equal-depth chunks along the row key), and the 100 
> scans of client A are queued first, followed immediately by the 100 scans of 
> client B. In this case, client B will be starved out of getting any results 
> back until the scans for client A complete.
> One solution to this is to use the attached AbstractRoundRobinQueue instead 
> of the standard FIFO queue. The queue to be used could be (maybe it already 
> is) configurable based on a new config parameter. Using this queue would 
> require the client to have the same identifier for all of the 100 parallel 
> scans that represent a single logical scan from the client's point of view. 
> With this information, the round robin queue would pick off a task from the 
> queue in a round robin fashion (instead of a strictly FIFO manner) to prevent 
> starvation over interleaved parallelized scans.
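
For concreteness, the round-robin idea described above can be sketched roughly 
as follows. This is not the attached AbstractRoundRobinQueue.java; the class 
and method names are made up for illustration, and production code would also 
need blocking take() semantics and capacity bounds:

import java.util.ArrayDeque;
import java.util.Iterator;
import java.util.LinkedHashMap;
import java.util.Map;
import java.util.Queue;

// Minimal illustration: calls are grouped by an opaque group identifier and
// dequeued one group at a time, so 100 queued scans from client A cannot
// starve the first scan from client B.
public class RoundRobinCallQueue<E> {

  // One FIFO sub-queue per group, kept in insertion order.
  private final Map<Object, Queue<E>> groups = new LinkedHashMap<>();

  public synchronized void offer(Object groupId, E call) {
    groups.computeIfAbsent(groupId, k -> new ArrayDeque<>()).add(call);
  }

  public synchronized E poll() {
    Iterator<Map.Entry<Object, Queue<E>>> it = groups.entrySet().iterator();
    if (!it.hasNext()) {
      return null;                     // nothing queued
    }
    Map.Entry<Object, Queue<E>> head = it.next();
    Object groupId = head.getKey();
    Queue<E> q = head.getValue();
    E call = q.poll();                 // take one call from the front group
    it.remove();                       // remove that group from the front...
    if (!q.isEmpty()) {
      groups.put(groupId, q);          // ...and requeue it at the back
    }
    return call;
  }
}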



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
