[
https://issues.apache.org/jira/browse/HBASE-8755?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13792301#comment-13792301
]
Feng Honghua commented on HBASE-8755:
-------------------------------------
[~stack]: Yes, it seems this patch against trunk shows an obvious regression
compared to the patch against 0.94.3 (our internal branch):
1) 5 threads: withPatch performs worse than withoutPatch (a 33% ops
regression, 4.2K vs. 6.3K)
2) 100 threads: withPatch's throughput is about 2.5X that of withoutPatch
(46.6K vs. 19.2K)
A short summary:
1) withoutPatch, the max ops of HLog is less than 20K (19K for trunk and 17K
for 0.94.3)
2) withPatch, the max ops of HLog is more than 45K (46K for trunk and 68K for
0.94.3)
3) for trunk, withPatch can perform even worse than withoutPatch (about a 33%
regression)
We'll try to figure out why withPatch performs worse than withoutPatch on
trunk, and try to ensure the performance is roughly equal when load is low
while still keeping an obvious improvement when load is high. :-)
[~stack]: would you please redo the test using 75/100 threads to re-confirm
whether the ops improvement matches our tests? (we see 37K vs. 16K for 75
threads and 46K vs. 19K for 100 threads)
[~zjushch]: which version of HBase did you apply the patch to, trunk or 0.94?
I wonder whether the same reason explains both our difference and stack's.
> A new write thread model for HLog to improve the overall HBase write
> throughput
> -------------------------------------------------------------------------------
>
> Key: HBASE-8755
> URL: https://issues.apache.org/jira/browse/HBASE-8755
> Project: HBase
> Issue Type: Improvement
> Components: Performance, wal
> Reporter: Feng Honghua
> Assignee: stack
> Priority: Critical
> Fix For: 0.96.1
>
> Attachments: 8755trunkV2.txt, HBASE-8755-0.94-V0.patch,
> HBASE-8755-0.94-V1.patch, HBASE-8755-trunk-V0.patch, HBASE-8755-trunk-V1.patch
>
>
> In the current write model, each write handler thread (executing put())
> individually goes through a full 'append (to HLog's local buffer) => HLog
> writer append (write to hdfs) => HLog writer sync (sync hdfs)' cycle for
> each write, which incurs heavy contention on updateLock and flushLock.
> The only existing optimization, checking whether the current syncTillHere >
> txid in the hope that another thread has already written/synced this txid to
> hdfs so the write/sync can be omitted, actually helps much less than
> expected.
> Three of my colleagues (Ye Hangjun / Wu Zesheng / Zhang Peng) at Xiaomi
> proposed a new write thread model for writing hdfs sequence files, and the
> prototype implementation shows a 4X throughput improvement (from 17000 to
> 70000+).
> I applied this new write thread model to HLog, and the performance test in
> our test cluster shows about a 3X throughput improvement (from 12150 to
> 31520 for 1 RS, from 22000 to 70000 for 5 RS); the 1 RS write throughput (1K
> row-size) even beats that of BigTable (the Percolator paper published in
> 2011 says Bigtable's write throughput at the time was 31002). I can provide
> the detailed performance test results if anyone is interested.
> The change for new write thread model is as below:
> 1> All put handler threads append their edits to HLog's local pending
> buffer; (each append notifies the AsyncWriter thread that there are new
> edits in the local buffer)
> 2> All put handler threads wait in the HLog.syncer() function for the
> underlying threads to finish the sync that covers their txid;
> 3> A single AsyncWriter thread is responsible for retrieving all the
> buffered edits from HLog's local pending buffer and writing them to hdfs
> (hlog.writer.append); (it notifies the AsyncFlusher thread that there are
> new writes to hdfs that need a sync)
> 4> A single AsyncFlusher thread is responsible for issuing a sync to hdfs
> to persist the writes made by AsyncWriter; (it notifies the AsyncNotifier
> thread that the sync watermark has increased)
> 5> A single AsyncNotifier thread is responsible for notifying all pending
> put handler threads that are waiting in the HLog.syncer() function;
> 6> There is no LogSyncer thread any more (since the
> AsyncWriter/AsyncFlusher threads always do the same job it did)
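For readers following the thread, the pipeline in the quoted description can be sketched roughly as below. This is an illustrative toy, not the HBASE-8755 patch: the class and field names are invented, and the AsyncNotifier (step 5) is folded into the flusher for brevity, whereas the real design keeps it separate so the flusher never blocks on notification.

```java
import java.util.ArrayList;
import java.util.List;

// Toy sketch of the batched append -> write -> sync -> notify pipeline.
// All names here are hypothetical, not from the actual patch.
public class PipelinedLogSketch {
    private final Object bufferLock = new Object(); // guards pendingBuffer
    private final Object writeLock = new Object();  // writer -> flusher handoff
    private final Object syncLock = new Object();   // sync watermark + waiters

    private List<Long> pendingBuffer = new ArrayList<>();
    private long nextTxid = 0;    // assigned on append, under bufferLock
    private long writtenTxid = 0; // highest txid "written" to hdfs, under writeLock
    private long syncedTxid = 0;  // highest txid "persisted", under syncLock
    private long lastFlushed = 0; // flusher-private progress marker
    private volatile boolean running = true;

    // Step 1: a put handler appends an edit and gets back its txid.
    long append(String edit) {
        synchronized (bufferLock) {
            pendingBuffer.add(++nextTxid);
            bufferLock.notify(); // wake the AsyncWriter
            return nextTxid;
        }
    }

    // Step 2: the handler blocks until the sync watermark covers its txid.
    void sync(long txid) throws InterruptedException {
        synchronized (syncLock) {
            while (syncedTxid < txid) syncLock.wait();
        }
    }

    // Step 3: the single AsyncWriter drains the whole buffer in one batch.
    void asyncWriter() throws InterruptedException {
        while (running) {
            List<Long> batch;
            synchronized (bufferLock) {
                while (running && pendingBuffer.isEmpty()) bufferLock.wait(10);
                if (pendingBuffer.isEmpty()) continue;
                batch = pendingBuffer;
                pendingBuffer = new ArrayList<>();
            }
            synchronized (writeLock) {
                writtenTxid = batch.get(batch.size() - 1); // stands in for writer.append
                writeLock.notify(); // wake the AsyncFlusher
            }
        }
    }

    // Step 4 (with step 5 folded in): the single AsyncFlusher "syncs"
    // everything written so far, then wakes every covered handler.
    void asyncFlusher() throws InterruptedException {
        while (running) {
            long toSync;
            synchronized (writeLock) {
                while (running && writtenTxid <= lastFlushed) writeLock.wait(10);
                toSync = writtenTxid;
            }
            if (toSync <= lastFlushed) continue;
            lastFlushed = toSync; // stands in for writer.sync
            synchronized (syncLock) {
                syncedTxid = toSync;
                syncLock.notifyAll(); // step 5: notify waiting put handlers
            }
        }
    }

    public static void main(String[] args) throws Exception {
        PipelinedLogSketch log = new PipelinedLogSketch();
        Thread writer = new Thread(() -> { try { log.asyncWriter(); } catch (InterruptedException e) {} });
        Thread flusher = new Thread(() -> { try { log.asyncFlusher(); } catch (InterruptedException e) {} });
        writer.setDaemon(true);
        flusher.setDaemon(true);
        writer.start();
        flusher.start();

        Thread[] handlers = new Thread[4]; // a few concurrent "put handlers"
        for (int i = 0; i < handlers.length; i++) {
            handlers[i] = new Thread(() -> {
                try {
                    for (int j = 0; j < 100; j++) log.sync(log.append("edit"));
                } catch (InterruptedException e) {}
            });
            handlers[i].start();
        }
        for (Thread h : handlers) h.join();
        log.running = false;
        synchronized (log.syncLock) {
            System.out.println("synced txid = " + log.syncedTxid); // 4 handlers * 100 edits
        }
    }
}
```

The point of the structure is that many handlers can block cheaply in sync() while exactly one thread appends and exactly one thread syncs, so a single hdfs sync covers a whole batch of txids; this is why the improvement grows with the number of concurrent threads.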
--
This message was sent by Atlassian JIRA
(v6.1#6144)