[
https://issues.apache.org/jira/browse/HBASE-13407?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14484685#comment-14484685
]
Nick Dimiduk commented on HBASE-13407:
--------------------------------------
Nice find [~esteban]. Can you suggest any other ways we can avoid the herd of
regions stampeding a region server?
> Add a configurable jitter to MemStoreFlusher#FlushHandler in order to smooth
> write latency
> ------------------------------------------------------------------------------------------
>
> Key: HBASE-13407
> URL: https://issues.apache.org/jira/browse/HBASE-13407
> Project: HBase
> Issue Type: Improvement
> Reporter: Esteban Gutierrez
> Assignee: Esteban Gutierrez
> Attachments: memstoreflush.png
>
>
> There is a very interesting behavior that I can reproduce consistently with
> many workloads from HBase 0.98 to HBase 1.0 since hbase.hstore.flusher.count
> was set by default to 2: when writes are evenly distributed across regions,
> memstores grow and flush at about the same rate, causing spikes in IO and CPU.
> The side effect of those spikes is a loss in throughput, which in some cases
> can be above 10%, impacting write metrics. When the flushes get out of sync,
> the spikes lower and throughput is very stable. Reverting
> hbase.hstore.flusher.count to 1 doesn't help much with write-heavy
> workloads, since we end up with a large flush queue that can eventually block
> writers.
> Adding a small configurable jitter,
> hbase.server.thread.wakefrequency.jitter.pct (a percentage of
> hbase.server.thread.wakefrequency), can help stagger the writes
> from FlushHandler to HDFS and smooth the write latencies when the memstores
> are flushed by multiple threads.
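The proposal above amounts to each flusher thread waiting its base wake frequency plus a small random offset, so that flushes which would otherwise align drift apart. A minimal sketch of that idea follows; the property names come from the issue, but the class, method names, and the 10% default jitter value are assumptions for illustration:

```java
import java.util.concurrent.ThreadLocalRandom;

public class FlushJitterSketch {
    // Mirrors hbase.server.thread.wakefrequency (10s by default);
    // the jitter percentage default here is assumed, not from the issue.
    static final long WAKE_FREQUENCY_MS = 10_000L;
    static final double JITTER_PCT = 0.10; // hbase.server.thread.wakefrequency.jitter.pct

    /**
     * Returns the wake interval with a random jitter of up to
     * JITTER_PCT * WAKE_FREQUENCY_MS added. Because each FlushHandler
     * draws its own offset, threads that started in lockstep stop
     * hitting HDFS at exactly the same moment.
     */
    static long jitteredWakeMillis() {
        long maxJitter = (long) (WAKE_FREQUENCY_MS * JITTER_PCT);
        return WAKE_FREQUENCY_MS + ThreadLocalRandom.current().nextLong(maxJitter + 1);
    }

    public static void main(String[] args) {
        for (int i = 0; i < 3; i++) {
            System.out.println("flusher wake interval ms = " + jitteredWakeMillis());
        }
    }
}
```

With a 10% jitter the sleep lands anywhere in [10000, 11000] ms, enough to de-synchronize flushers over a few cycles without meaningfully delaying any single flush.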
--
This message was sent by Atlassian JIRA
(v6.3.4#6332)