[
https://issues.apache.org/jira/browse/HBASE-12728?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14257393#comment-14257393
]
Carter commented on HBASE-12728:
--------------------------------
So I've been back and forth on this and have come to the same conclusion as
[~lhofhansl]. I’ll even be a little more blunt and assert that auto-flush is
categorically bad. It leads to an illusion of free speed gains when, in fact,
you sacrifice durability guarantees, a basic tenet of a database.
If a user makes a decision to sacrifice durability for speed, that can be done
without help from the HBase client. Create a thread-safe object wrapping two
lists, one for buffering, and one for updating. That object manages a thread
that gets triggered when the collection reaches a certain length or when a
timeout expires, at which point it moves the buffered list into the update list
and submits the batch.
If this is a common enough use case, then HBase could even package this as a
utility for those who need it. Something like this:
{code:java}
public class PutBuffer {
  public PutBuffer(ConnectionFactory connectionFactory) {…}
  public void setFlushTimeout(int millis) {…}
  public int getFlushTimeout() {…}
  public void addPut(TableName table, Put put) {…}
  public void flush() {…}
}
{code}
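To make the idea concrete, here is a minimal generic sketch of the triggering logic described above: two lists (one buffering, one being flushed), with a flush fired when the buffer reaches a size threshold or a timeout elapses. All names are illustrative, and the flush target is abstracted to a {{Consumer}} rather than an HBase table so the sketch stands alone; a real version would submit the batch to the cluster and handle failures.
{code:java}
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;
import java.util.function.Consumer;

/**
 * Illustrative batching buffer: flushes when the buffer reaches
 * maxSize, or when flushTimeoutMillis elapses, whichever comes first.
 * The sink stands in for whatever actually submits the batch.
 */
class BatchBuffer<T> {
    private final Object lock = new Object();
    private List<T> buffer = new ArrayList<>();
    private final int maxSize;
    private final Consumer<List<T>> sink;
    private final ScheduledExecutorService timer =
            Executors.newSingleThreadScheduledExecutor();

    BatchBuffer(int maxSize, long flushTimeoutMillis, Consumer<List<T>> sink) {
        this.maxSize = maxSize;
        this.sink = sink;
        // Periodic flush so a partially filled buffer never waits forever.
        timer.scheduleAtFixedRate(this::flush, flushTimeoutMillis,
                flushTimeoutMillis, TimeUnit.MILLISECONDS);
    }

    void add(T item) {
        List<T> toFlush = null;
        synchronized (lock) {
            buffer.add(item);
            if (buffer.size() >= maxSize) {
                // Swap in a fresh buffering list; flush the full one.
                toFlush = buffer;
                buffer = new ArrayList<>();
            }
        }
        // Submit outside the lock so slow flushes don't block writers.
        if (toFlush != null) sink.accept(toFlush);
    }

    void flush() {
        List<T> toFlush;
        synchronized (lock) {
            if (buffer.isEmpty()) return;
            toFlush = buffer;
            buffer = new ArrayList<>();
        }
        sink.accept(toFlush);
    }

    void close() {
        timer.shutdown();
        flush();
    }
}
{code}
Note the durability trade-off is explicit here: anything still in the buffer when the process dies is simply lost, which is exactly the property the caller opted into.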
But baking this into the client itself as first-class functionality is
inherently confusing (“where did my writes go when the servlet crashed?”) and
adds a lot of unnecessary complexity to the critical write path. It's a
complex-enough system already. Keep it simple wherever possible.
My $0.02.
> buffered writes substantially less useful after removal of HTablePool
> ---------------------------------------------------------------------
>
> Key: HBASE-12728
> URL: https://issues.apache.org/jira/browse/HBASE-12728
> Project: HBase
> Issue Type: Bug
> Components: hbase
> Affects Versions: 0.98.0
> Reporter: Aaron Beppu
>
> In previous versions of HBase, when use of HTablePool was encouraged, HTable
> instances were long-lived in that pool, and for that reason, if autoFlush was
> set to false, the table instance could accumulate a full buffer of writes
> before a flush was triggered. Writes from the client to the cluster could
> then be substantially larger and less frequent than without buffering.
> However, when HTablePool was deprecated, the primary justification seems to
> have been that creating HTable instances is cheap, so long as the connection
> and executor service being passed to it are pre-provided. A use pattern was
> encouraged where users should create a new HTable instance for every
> operation, using an existing connection and executor service, and then close
> the table. In this pattern, buffered writes are substantially less useful;
> writes are as small and as frequent as they would have been with
> autoflush=true, except the synchronous write is moved from the operation
> itself to the table close call which immediately follows.
> More concretely:
> ```java
> // Given these two helpers ...
> private HTableInterface getAutoFlushTable(String tableName) throws IOException {
>   // (autoflush is true by default)
>   return storedConnection.getTable(tableName, executorService);
> }
>
> private HTableInterface getBufferedTable(String tableName) throws IOException {
>   HTableInterface table = getAutoFlushTable(tableName);
>   table.setAutoFlush(false);
>   return table;
> }
>
> // It's my contention that these two methods would behave almost identically,
> // except that the first will hit a synchronous flush during the put call,
> // and the second will flush during the (hidden) close call on the table.
> private void writeAutoFlushed(Put somePut) throws IOException {
>   try (HTableInterface table = getAutoFlushTable(tableName)) {
>     table.put(somePut); // will do synchronous flush
>   }
> }
>
> private void writeBuffered(Put somePut) throws IOException {
>   try (HTableInterface table = getBufferedTable(tableName)) {
>     table.put(somePut);
>   } // auto-close will trigger synchronous flush
> }
> ```
> For buffered writes to actually provide a performance benefit to users, one
> of two things must happen:
> - The writeBuffer itself shouldn't live, flush, and die with the lifecycle of
> its HTable instance. If the writeBuffer were managed elsewhere and had a long
> lifespan, this could cease to be an issue. However, if the same writeBuffer
> is appended to by multiple tables, then some additional concurrency control
> will be needed around it.
> - Alternatively, there should be some pattern for having long-lived HTable
> instances. However, since HTable is not thread-safe, we'd need multiple
> instances, and a mechanism for leasing them out safely -- which sure sounds a
> lot like the old HTablePool to me.
> See discussion on mailing list here :
> http://mail-archives.apache.org/mod_mbox/hbase-user/201412.mbox/%3CCAPdJLkEzmUQZ_kvD%3D8mrxi4V%3DhCmUp3g9MUZsddD%2Bmon%2BAvNtg%40mail.gmail.com%3E
--
This message was sent by Atlassian JIRA
(v6.3.4#6332)