[ https://issues.apache.org/jira/browse/HBASE-10070?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14005780#comment-14005780 ]

Mikhail Antonov commented on HBASE-10070:
-----------------------------------------

[~devaraj],  [~stack],  [~enis]

Yeah, the reason I brought it up is that, unlike changes in the LB for example, 
this is a public (yet evolving) API, so I just wanted to double-check that we 
don't expose details to client code that would limit us later.

bq. Even within a given Consistency model, you may want different execution 
strategies I think (like for TIMELINE consistency, parallel and parallel with 
delay, or go to first replica, then second, then third, etc). In the committed 
code in branch, the consistency model implies hard coded execution model.
Sure, any consistency model (except the current behavior, I guess) would 
benefit from being customizable.
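
To make the split between consistency level and execution strategy concrete, 
this is roughly what I have in mind for the pluggable execution side. Only a 
sketch - the interface name and methods are hypothetical and not in the branch:

{code:java}
// Hypothetical sketch only -- not the committed branch API. The idea is that
// the execution strategy (parallel, parallel-with-delay, sequential fallback)
// is separated from the Consistency level the user asks for.
public interface ReplicaReadPolicy {

  /** How long to wait on the primary before involving secondaries, in ms. */
  long primaryCallTimeoutMs();

  /** Whether secondaries are queried in parallel or tried one by one. */
  boolean parallel();

  /** Order in which replica ids are tried when going sequentially. */
  int[] replicaOrder(int replicaCount);
}

// Example: "parallel with delay" for TIMELINE reads -- fan out to secondaries
// only if the primary hasn't answered within 10 ms.
class ParallelWithDelayPolicy implements ReplicaReadPolicy {
  @Override public long primaryCallTimeoutMs() { return 10; }
  @Override public boolean parallel() { return true; }
  @Override public int[] replicaOrder(int replicaCount) {
    int[] order = new int[replicaCount];
    for (int i = 0; i < replicaCount; i++) {
      order[i] = i; // primary first, then secondaries in id order
    }
    return order;
  }
}
{code}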

bq. So, rather than have the client ask for level of 'consistency' in the API, 
instead, the replica interaction would be set on client construction dependent 
on the plugin supplied?
Either "rather" or "both", I guess. If we could say that the level of 
consistency (strong, timeline or quorum-strong) is defined in config files per 
client (not per operation), we would be able to avoid having this enum. But we 
consider being able to define the consistency level per operation to be 
mandatory, right?
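
For reference, per-operation usage would look roughly like this (the 
Consistency enum is from the branch code; the Table interface and exact method 
names are taken from the newer client API and are illustrative only):

{code:java}
import java.io.IOException;
import org.apache.hadoop.hbase.client.Consistency;
import org.apache.hadoop.hbase.client.Get;
import org.apache.hadoop.hbase.client.Result;
import org.apache.hadoop.hbase.client.Table;

public class TimelineGetExample {
  // Per-operation consistency: the hint is set on the individual Get,
  // not fixed for the whole client connection.
  static Result timelineGet(Table table, byte[] row) throws IOException {
    Get get = new Get(row);
    get.setConsistency(Consistency.TIMELINE); // allow possibly-stale reads from secondaries
    Result result = table.get(get);
    if (result.isStale()) {
      // The read was served by a secondary replica and may lag the primary.
    }
    return result;
  }
}
{code}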

In that case I'm thinking of the following model:
 - deploy a pluggable policy on the client side which decides how RPC requests 
are issued; this policy would be used globally as the default for all requests
 - consider the Consistency enum (and point it out in both user- and dev-level 
docs) as a "hint", used only to customize individual scans or gets, and 
probably add a note in the class documentation that the cluster may ignore the 
flag if the feature isn't available?
 - the current timeline consistency model doesn't assume quorums for writes, so 
I think it makes sense to add QUORUM_STRONG to the enum (sketched below).
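
Roughly, the enum would then read something like this (QUORUM_STRONG is only 
the proposed addition, not in the branch):

{code:java}
// Sketch of the client-side Consistency enum with the proposed addition.
// STRONG and TIMELINE mirror the branch; QUORUM_STRONG is hypothetical and
// only illustrates the suggestion above.
public enum Consistency {
  /** Default: reads go to the primary replica only (current behavior). */
  STRONG,
  /** Reads may be served by secondary replicas and can therefore be stale. */
  TIMELINE,
  /** Proposed: reads backed by a quorum of replicas, for a quorum-based write path. */
  QUORUM_STRONG
}
{code}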
 
Thoughts?


> HBase read high-availability using timeline-consistent region replicas
> ----------------------------------------------------------------------
>
>                 Key: HBASE-10070
>                 URL: https://issues.apache.org/jira/browse/HBASE-10070
>             Project: HBase
>          Issue Type: New Feature
>            Reporter: Enis Soztutar
>            Assignee: Enis Soztutar
>         Attachments: HighAvailabilityDesignforreadsApachedoc.pdf
>
>
> In the present HBase architecture, it is hard, probably impossible, to 
> satisfy constraints like serving the 99th percentile of reads under 10 ms. 
> One of the major factors that affects this is the MTTR for regions. There 
> are three phases in the MTTR process - detection, assignment, and recovery. 
> Of these, detection is usually the longest and is presently on the order of 
> 20-30 seconds. During this time, clients are not able to read the region's 
> data.
> However, some clients would be better served if regions were available for 
> eventually consistent reads during recovery. This would help satisfy 
> low-latency guarantees for the class of applications that can work with 
> stale reads.
> To improve read availability, we propose a replicated read-only region 
> serving design, also referred to as secondary regions, or region shadows. 
> Extending the current model of a region being opened for reads and writes in 
> a single region server, the region will also be opened for reading in other 
> region servers. The region server which hosts the region for reads and writes 
> (as in the current case) will be declared PRIMARY, while 0 or more region 
> servers might be hosting the region as SECONDARY. There may be more than one 
> secondary (replica count > 2).
> Will attach a design doc shortly which contains most of the details and some 
> thoughts about development approaches. Reviews are more than welcome. 
> We also have a proof-of-concept patch, which includes the master and region 
> server side of the changes. Client-side changes will be coming soon as well. 


