[ https://issues.apache.org/jira/browse/HBASE-17018?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]
Joep Rottinghuis updated HBASE-17018:
-------------------------------------
Attachment: HBASE-17018.master.004.patch
Attaching patch 4 to incorporate feedback from Ted.
Added unit tests for size-based flushing as well as size-based put validation.
Added RB: https://reviews.apache.org/r/54882/
Still much work to do...
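For illustration, here is a rough sketch of the behavior those tests are meant to cover, assuming a wrapper around a plain BufferedMutator; the class and field names are mine, not from the patch: reject a single mutation that can never fit the configured buffer, and flush once the tracked buffered bytes cross the threshold.
{code:java}
import java.io.IOException;
import java.util.concurrent.atomic.AtomicLong;

import org.apache.hadoop.hbase.client.BufferedMutator;
import org.apache.hadoop.hbase.client.Mutation;

// Illustrative wrapper only, not the attached patch.
public class SizeBoundedMutatorSketch {
  private final BufferedMutator delegate;
  private final long maxBufferBytes;                 // flush threshold in bytes
  private final AtomicLong bufferedBytes = new AtomicLong();

  public SizeBoundedMutatorSketch(BufferedMutator delegate, long maxBufferBytes) {
    this.delegate = delegate;
    this.maxBufferBytes = maxBufferBytes;
  }

  public void mutate(Mutation m) throws IOException {
    // Size-based put validation: a single mutation larger than the whole
    // buffer can never be flushed, so reject it up front.
    if (m.heapSize() > maxBufferBytes) {
      throw new IllegalArgumentException("Mutation of " + m.heapSize()
          + " bytes exceeds the " + maxBufferBytes + " byte buffer");
    }
    delegate.mutate(m);
    // Size-based flushing: flush once the tracked size crosses the threshold.
    if (bufferedBytes.addAndGet(m.heapSize()) >= maxBufferBytes) {
      delegate.flush();
      bufferedBytes.set(0);
    }
  }
}
{code}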
> Spooling BufferedMutator
> ------------------------
>
> Key: HBASE-17018
> URL: https://issues.apache.org/jira/browse/HBASE-17018
> Project: HBase
> Issue Type: New Feature
> Reporter: Joep Rottinghuis
> Attachments: HBASE-17018.master.001.patch,
> HBASE-17018.master.002.patch, HBASE-17018.master.003.patch,
> HBASE-17018.master.004.patch,
> HBASE-17018SpoolingBufferedMutatorDesign-v1.pdf, YARN-4061 HBase requirements
> for fault tolerant writer.pdf
>
>
> For YARN Timeline Service v2 we use HBase as a backing store.
> A big concern we would like to address is what to do if HBase is
> (temporarily) down, for example in case of an HBase upgrade.
> Most of the high-volume writes will be on a best-effort basis, but
> occasionally we do a flush, mainly during application lifecycle events, when
> clients call flush on the timeline service API. To handle the volume of
> writes we use a BufferedMutator. When flush gets called on our API, we in
> turn call flush on the BufferedMutator.
> We would like our interface to HBase to be able to spool the mutations to a
> filesystem in case of HBase errors. If we use the Hadoop filesystem
> interface, this can then be HDFS, GCS, S3, or any other distributed storage.
> The mutations can then later be replayed, for example through a MapReduce
> job.
> https://reviews.apache.org/r/54882/
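A minimal sketch of the spooling idea described above, not the design in the attached PDF or patches; the class name, spool directory layout, and serialization here are placeholders.
{code:java}
import java.io.IOException;
import java.util.ArrayList;
import java.util.List;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FSDataOutputStream;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.hbase.client.BufferedMutator;
import org.apache.hadoop.hbase.client.Mutation;

// Placeholder class: wraps a BufferedMutator and, when a flush to HBase
// fails, spools the pending mutations to a Hadoop FileSystem (HDFS, S3,
// GCS, ...) so they can be replayed later.
public class SpoolingMutatorSketch {
  private final BufferedMutator delegate;
  private final FileSystem fs;
  private final Path spoolDir;                    // e.g. hdfs:///tmp/atsv2-spool
  // Track mutations here so they are still available for spooling if the
  // delegate's flush fails; the real design is more involved.
  private final List<Mutation> pending = new ArrayList<>();

  public SpoolingMutatorSketch(BufferedMutator delegate, Configuration conf,
      Path spoolDir) throws IOException {
    this.delegate = delegate;
    this.spoolDir = spoolDir;
    this.fs = spoolDir.getFileSystem(conf);
  }

  public synchronized void mutate(Mutation m) throws IOException {
    pending.add(m);
    delegate.mutate(m);
  }

  public synchronized void flush() throws IOException {
    try {
      delegate.flush();
      pending.clear();
    } catch (IOException e) {
      // HBase is (temporarily) down: spool the mutations instead of failing
      // the caller, then keep accepting best-effort writes.
      spool(pending);
      pending.clear();
    }
  }

  private void spool(List<Mutation> mutations) throws IOException {
    Path file = new Path(spoolDir, "spool-" + System.currentTimeMillis());
    try (FSDataOutputStream out = fs.create(file)) {
      for (Mutation m : mutations) {
        // Placeholder serialization; a replayable format (e.g. protobuf-encoded
        // mutations) would be used in practice.
        out.writeUTF(m.toString());
      }
    }
  }
}
{code}
On replay, a MapReduce (or other) job would read the spool files back and re-issue the mutations against HBase once it is reachable again.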
--
This message was sent by Atlassian JIRA
(v6.3.4#6332)