[ https://issues.apache.org/jira/browse/HBASE-17018?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15635154#comment-15635154 ]

Phil Yang commented on HBASE-17018:
-----------------------------------

An interesting and useful feature :) 

Cassandra has a feature/API called "atomic batches". If the client gets a 
response from the server, we can say all mutations have been executed. If the 
request times out, the server can guarantee "all or nothing" -- all mutations 
will be executed eventually or none will be executed. I think we can provide a 
similar feature to HBase users as a Table API. And BufferedMutator can have 
optional logic to switch to this API when the initial request fails.
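
To make the idea concrete, a rough sketch of such a fallback wrapper could look 
like the following (SpoolingBufferedMutator and spoolToFileSystem are made-up 
names, not an existing API, and the bookkeeping for partially-applied flushes 
is glossed over):

{code:java}
import java.io.IOException;
import java.util.ArrayList;
import java.util.List;

import org.apache.hadoop.hbase.client.BufferedMutator;
import org.apache.hadoop.hbase.client.Mutation;

public class SpoolingBufferedMutator {
  private final BufferedMutator delegate;                // normal client-side buffer
  private final List<Mutation> pending = new ArrayList<>();

  public SpoolingBufferedMutator(BufferedMutator delegate) {
    this.delegate = delegate;
  }

  public synchronized void mutate(Mutation m) throws IOException {
    pending.add(m);          // remember it until a flush succeeds
    delegate.mutate(m);
  }

  public synchronized void flush() throws IOException {
    try {
      delegate.flush();
      pending.clear();       // HBase accepted everything
    } catch (IOException e) {
      // HBase is down or timing out: spool the unacknowledged mutations so
      // they can be replayed later ("all or nothing", eventually).
      spoolToFileSystem(pending);   // hypothetical helper, e.g. writes to HDFS
      pending.clear();
    }
  }

  private void spoolToFileSystem(List<Mutation> mutations) throws IOException {
    // Placeholder: serialize the mutations to a WAL-like file on HDFS/S3/GCS.
  }
}
{code}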

For the implementation, I think we'd better not use MR because users may not 
have MR available for an HBase cluster, or at least the RS should be able to 
replay the mutations. If we save mutations to HDFS, the client will be like an 
RS that writes a WAL to HDFS, right? We can then use logic just like 
distributed log replay to read the log entries and use the client API to write 
them to all region servers.
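
The replay side could then be a thin client-side tool, roughly like this 
(readSpooledMutations and the spool path/format are assumptions, not an 
existing API):

{code:java}
import java.util.List;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;
import org.apache.hadoop.hbase.client.Mutation;
import org.apache.hadoop.hbase.client.Table;

public class SpoolReplayer {
  public static void main(String[] args) throws Exception {
    Configuration conf = HBaseConfiguration.create();
    Path spool = new Path(args[0]);                  // e.g. a file under the spool dir
    TableName tableName = TableName.valueOf(args[1]);

    FileSystem fs = spool.getFileSystem(conf);
    List<Mutation> mutations = readSpooledMutations(fs, spool); // hypothetical

    try (Connection conn = ConnectionFactory.createConnection(conf);
         Table table = conn.getTable(tableName)) {
      // Let the client route each mutation to the right region server.
      Object[] results = new Object[mutations.size()];
      table.batch(mutations, results);
    }
  }

  private static List<Mutation> readSpooledMutations(FileSystem fs, Path p) {
    // Placeholder: parse whatever format the spooling writer used.
    throw new UnsupportedOperationException("spool format not defined yet");
  }
}
{code}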

Thanks.

> Spooling BufferedMutator
> ------------------------
>
>                 Key: HBASE-17018
>                 URL: https://issues.apache.org/jira/browse/HBASE-17018
>             Project: HBase
>          Issue Type: New Feature
>            Reporter: Joep Rottinghuis
>         Attachments: YARN-4061 HBase requirements for fault tolerant 
> writer.pdf
>
>
> For Yarn Timeline Service v2 we use HBase as a backing store.
> A big concern we would like to address is what to do if HBase is 
> (temporarily) down, for example in case of an HBase upgrade.
> Most of the high volume writes will be on a best-effort basis, but 
> occasionally we do a flush. Mainly during application lifecycle events, 
> clients will call a flush on the timeline service API. In order to handle the 
> volume of writes we use a BufferedMutator. When flush gets called on our API, 
> we in turn call flush on the BufferedMutator.
> We would like our interface to HBase to be able to spool the mutations to a 
> filesystem in case of HBase errors. If we use the Hadoop filesystem 
> interface, this can then be HDFS, gcs, s3, or any other distributed storage. 
> The mutations can then later be re-played, for example through a MapReduce 
> job.
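
For illustration only, the MapReduce replay mentioned in the description could 
be a map-only job in the style of the existing Import tool; the spool record 
format and parsePut() below are assumptions, not part of the proposal:

{code:java}
import java.io.IOException;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.client.Put;
import org.apache.hadoop.hbase.io.ImmutableBytesWritable;
import org.apache.hadoop.hbase.mapreduce.TableMapReduceUtil;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.Mapper;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;

public class ReplaySpooledMutations {

  static class ReplayMapper
      extends Mapper<LongWritable, Text, ImmutableBytesWritable, Put> {
    @Override
    protected void map(LongWritable key, Text line, Context context)
        throws IOException, InterruptedException {
      // Hypothetical: rebuild the Put from one spooled record.
      Put put = parsePut(line);
      context.write(new ImmutableBytesWritable(put.getRow()), put);
    }

    private Put parsePut(Text line) {
      // Placeholder: depends on how the spooling writer serialized mutations.
      throw new UnsupportedOperationException("spool format not defined yet");
    }
  }

  public static void main(String[] args) throws Exception {
    Configuration conf = HBaseConfiguration.create();
    Job job = Job.getInstance(conf, "replay-spooled-mutations");
    job.setJarByClass(ReplaySpooledMutations.class);
    job.setMapperClass(ReplayMapper.class);
    FileInputFormat.addInputPath(job, new Path(args[0]));   // spool directory
    // Map-only job: TableOutputFormat sends the Puts to the target table.
    TableMapReduceUtil.initTableReducerJob(args[1], null, job);
    job.setNumReduceTasks(0);
    System.exit(job.waitForCompletion(true) ? 0 : 1);
  }
}
{code}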



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
