[
https://issues.apache.org/jira/browse/HBASE-10070?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13837247#comment-13837247
]
Devaraj Das commented on HBASE-10070:
-------------------------------------
[~vrodionov], we'd like clients to see the least downtime for their queries
when the primary is not reachable for any reason (including a temporary
network partition). We want to be doubly sure that we are marking the server
dead at the appropriate time - not too soon and not too late. That's why 20
seconds or so, in a cluster of, say, 100 nodes, seems like a good value for the
session timeout. Also, in practice we have seen cases where a node appears
to be fine but in reality it isn't (a faulty disk and things like that), and
that increases the latency of the responses. We are trying to address the use
case where clients are willing to (knowingly) tolerate the staleness of the
reads.
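As a concrete illustration, the ZooKeeper session timeout that bounds this detection window is configured in hbase-site.xml via the zookeeper.session.timeout property; the 20-second value below is illustrative of the trade-off discussed here, not a recommendation:

```xml
<!-- hbase-site.xml: ZooKeeper session timeout in milliseconds.
     The master declares a region server dead once its ZK session
     expires, so this value bounds the failure-detection window.
     Too low risks false positives; too high delays recovery. -->
<property>
  <name>zookeeper.session.timeout</name>
  <value>20000</value>
</property>
```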
But yeah, we should be able to poll for the existence of the RS process/node
(from a separate process, local or remote) and remove the ZK node when we
discover that the RS process is down. Discussions around these issues are in
HBASE-5843.
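A minimal sketch of such an external watchdog, assuming a hypothetical liveness probe and znode-removal hook (the `processAlive` and `removeZkNode` names are illustrative, not actual HBase or ZooKeeper APIs):

```java
import java.util.function.BooleanSupplier;

/**
 * Sketch of an external watchdog that polls a region server's
 * liveness and, once the process is confirmed down, removes its
 * ephemeral ZK node so failover can begin before the session
 * timeout expires. The probe and remover are hypothetical hooks.
 */
public class RsWatchdog {
    private final BooleanSupplier processAlive; // e.g. a pid check or RPC ping
    private final Runnable removeZkNode;        // e.g. delete the RS znode in ZK
    private final int confirmations;            // consecutive failed probes required

    public RsWatchdog(BooleanSupplier processAlive, Runnable removeZkNode,
                      int confirmations) {
        this.processAlive = processAlive;
        this.removeZkNode = removeZkNode;
        this.confirmations = confirmations;
    }

    /** Runs one round of polling; returns true if the znode was removed. */
    public boolean poll() {
        // Require several consecutive failures to avoid acting on a
        // transient blip (the "not too soon" half of the trade-off).
        for (int i = 0; i < confirmations; i++) {
            if (processAlive.getAsBoolean()) {
                return false; // process is (still) up; do nothing
            }
        }
        removeZkNode.run(); // confirmed down: expire its presence eagerly
        return true;
    }
}
```

Requiring several consecutive failed probes is one simple guard against declaring a server dead during a momentary hiccup.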
> HBase read high-availability using eventually consistent region replicas
> ------------------------------------------------------------------------
>
> Key: HBASE-10070
> URL: https://issues.apache.org/jira/browse/HBASE-10070
> Project: HBase
> Issue Type: New Feature
> Reporter: Enis Soztutar
> Assignee: Enis Soztutar
> Attachments: HighAvailabilityDesignforreadsApachedoc.pdf
>
>
> In the present HBase architecture, it is hard, probably impossible, to
> satisfy constraints like the 99th percentile of reads being served under 10
> ms. One of the major factors affecting this is the MTTR for regions. There
> are three phases in the MTTR process - detection, assignment, and recovery.
> Of these, detection is usually the longest and is presently on the order
> of 20-30 seconds. During this time, clients are not able to read the
> region data.
> However, some clients would be better served if regions remained available
> for eventually consistent reads during recovery. This would help satisfy
> low-latency guarantees for the class of applications that can work with
> stale reads.
> To improve read availability, we propose a replicated read-only region
> serving design, also referred to as secondary regions, or region shadows.
> Extending the current model of a region being opened for reads and writes in
> a single region server, the region will also be opened for reading in other
> region servers. The region server which hosts the region for reads and
> writes (as in the current case) will be declared PRIMARY, while 0 or more
> region servers might host the region as SECONDARY. There may be more than
> one secondary (replica count > 2).
> Will attach a design doc shortly which contains most of the details and some
> thoughts about development approaches. Reviews are more than welcome.
> We also have a proof of concept patch, which includes the master and region
> server side changes. Client side changes will be coming soon as well.
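The primary/secondary read model described above can be sketched from the client's point of view: try the primary first, then fall back to a secondary replica and accept a possibly stale result. All names here are illustrative assumptions, not the eventual HBase client API:

```java
import java.util.List;
import java.util.Optional;
import java.util.function.Supplier;

/** Result of a replica read, tagged with whether it may be stale. */
class ReplicaResult {
    final String value;
    final boolean possiblyStale;
    ReplicaResult(String value, boolean possiblyStale) {
        this.value = value;
        this.possiblyStale = possiblyStale;
    }
}

/**
 * Sketch of a client-side read that prefers the primary replica and
 * falls back to secondaries for an eventually consistent answer.
 * Each replica is modeled as a supplier that returns empty on failure.
 */
class ReplicaReader {
    static Optional<ReplicaResult> read(Supplier<Optional<String>> primary,
                                        List<Supplier<Optional<String>>> secondaries) {
        Optional<String> v = primary.get();
        if (v.isPresent()) {
            // Primary answered: strongly consistent read.
            return Optional.of(new ReplicaResult(v.get(), false));
        }
        for (Supplier<Optional<String>> s : secondaries) {
            Optional<String> sv = s.get();
            if (sv.isPresent()) {
                // Secondary answered: possibly stale, flagged as such.
                return Optional.of(new ReplicaResult(sv.get(), true));
            }
        }
        return Optional.empty(); // all replicas unreachable
    }
}
```

Tagging the result with a staleness flag lets applications that cannot tolerate stale data reject the fallback answer, while latency-sensitive ones accept it knowingly.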
--
This message was sent by Atlassian JIRA
(v6.1#6144)