[
https://issues.apache.org/jira/browse/HBASE-8755?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13771587#comment-13771587
]
stack commented on HBASE-8755:
------------------------------
I was wrong that HLogPE calls doWrite; it calls append, just as an HRegionServer
would.
Here are the numbers. They confirm what [~zjushch] found way back up top of
this issue.
I was running HLogPE against a five-node HDFS cluster. The DataNodes were
persisting to a fusionio drive, so there was little friction at the drive itself.
Below are the seconds elapsed running the following:
{code}
$ for i in 1 5 50; do
    for j in 1 2 3; do
      ./bin/hbase --config /home/stack/conf_hbase \
        org.apache.hadoop.hbase.regionserver.wal.HLogPerformanceEvaluation \
        -verify -threads "${i}" -iterations 1000000 -nocleanup \
        -keySize 50 -valueSize 100 &> /tmp/log-patch"${i}"."${j}".txt
    done
  done
{code}
||Thread Count||Without Patch (seconds)||With Patch (seconds)||% diff||
|1|991.258|1125.208|-11.90|
|1|924.881|1137.754|-18.70|
|1|979.312|1142.959|-14.31|
|5|950.968|1914.448|-50.32|
|5|939.312|1918.188|-51.03|
|5|947.183|1939.806|-51.17|
|50|2960.095|1918.808|54.26|
|50|2924.844|1933.020|51.30|
|50|2927.617|1955.358|49.72|
So: about 12-19% slower for single-threaded writes, and about 50% slower with
five threads writing concurrently, BUT about 50% faster with 50 concurrent
threads (our current default).
Can we somehow get the best of both worlds and switch to this new model only
under high contention?
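For illustration only, here is one rough shape such a switch could take (the
class, fields, and threshold below are invented, not from any attached patch):
count how many handler threads are concurrently inside sync and hand off to the
async pipeline only once that count crosses a threshold.
{code}
import java.util.concurrent.atomic.AtomicInteger;

// Hypothetical sketch: route each sync either down the old per-handler path or
// into the async pipeline, depending on how many handlers are syncing right now.
public class AdaptiveSyncSketch {
  // Threshold invented for illustration; it would need tuning.
  private static final int CONTENTION_THRESHOLD = 8;
  private final AtomicInteger concurrentSyncers = new AtomicInteger();

  public void sync(long txid) throws InterruptedException {
    int waiters = concurrentSyncers.incrementAndGet();
    try {
      if (waiters <= CONTENTION_THRESHOLD) {
        syncDirectly(txid);         // low contention: current per-handler write/sync
      } else {
        enqueueAndAwaitAsync(txid); // high contention: hand off to the async pipeline
      }
    } finally {
      concurrentSyncers.decrementAndGet();
    }
  }

  private void syncDirectly(long txid) {
    // placeholder for the existing write/sync cycle
  }

  private void enqueueAndAwaitAsync(long txid) throws InterruptedException {
    // placeholder for appending to the pending buffer and waiting on the watermark
  }
}
{code}
Whether flipping between the two paths at runtime can preserve txid ordering is
exactly the open question; the sketch only shows where the decision point might sit.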
> A new write thread model for HLog to improve the overall HBase write
> throughput
> -------------------------------------------------------------------------------
>
> Key: HBASE-8755
> URL: https://issues.apache.org/jira/browse/HBASE-8755
> Project: HBase
> Issue Type: Improvement
> Components: Performance, wal
> Reporter: Feng Honghua
> Assignee: stack
> Priority: Critical
> Fix For: 0.96.1
>
> Attachments: 8755trunkV2.txt, HBASE-8755-0.94-V0.patch,
> HBASE-8755-0.94-V1.patch, HBASE-8755-trunk-V0.patch, HBASE-8755-trunk-V1.patch
>
>
> In the current write model, each write handler thread (executing put()) will
> individually go through a full 'append (hlog local buffer) => HLog writer
> append (write to hdfs) => HLog writer sync (sync hdfs)' cycle for each write,
> which incurs heavy contention on updateLock and flushLock.
> The only existing optimization, checking whether the current syncTillHere >
> txid in the hope that another thread has already written/synced this txid to
> hdfs so the write/sync can be skipped, actually helps much less than expected.
> Three of my colleagues (Ye Hangjun / Wu Zesheng / Zhang Peng) at Xiaomi
> proposed a new write thread model for writing hdfs sequence files, and the
> prototype implementation shows a 4X throughput improvement (from 17000 to
> 70000+).
> I applied this new write thread model to HLog, and the performance test in our
> test cluster shows about a 3X throughput improvement (from 12150 to 31520 for
> 1 RS, and from 22000 to 70000 for 5 RS); the 1 RS write throughput (1K row
> size) even beats that of BigTable (the Percolator paper published in 2011 says
> BigTable's write throughput was 31002 at the time). I can provide the detailed
> performance test results if anyone is interested.
> The change for the new write thread model is as below:
> 1> All put handler threads append their edits to HLog's local pending buffer
> and notify the AsyncWriter thread that there are new edits in the local
> buffer;
> 2> All put handler threads wait in the HLog.syncer() function for the
> underlying threads to finish the sync that covers their txid;
> 3> A single AsyncWriter thread is responsible for retrieving all the buffered
> edits from HLog's local pending buffer and writing them to hdfs
> (hlog.writer.append); it then notifies the AsyncFlusher thread that there are
> new writes to hdfs that need a sync;
> 4> A single AsyncFlusher thread is responsible for issuing a sync to hdfs to
> persist the writes made by the AsyncWriter; it then notifies the AsyncNotifier
> thread that the sync watermark has increased;
> 5> A single AsyncNotifier thread is responsible for notifying all pending put
> handler threads that are waiting in the HLog.syncer() function;
> 6> There is no LogSyncer thread any more, since the AsyncWriter/AsyncFlusher
> threads now do the same job it did (see the sketch below).
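Below is a minimal, self-contained sketch of the producer/consumer chain
described in steps 1> through 6> above. The class and field names, the single
shared monitor, and the byte[] edits are simplifications for illustration; this
is not code from the attached patches.
{code}
import java.util.ArrayList;
import java.util.List;

public class AsyncHLogSketch {
  private final List<byte[]> pendingEdits = new ArrayList<>(); // local pending buffer
  private long lastPendingTxid = 0; // txid handed out on append (step 1)
  private long writtenTxid = 0;     // highest txid written to hdfs (step 3)
  private long flushedTxid = 0;     // highest txid synced to hdfs (step 4)
  private long syncedTillHere = 0;  // watermark the put handlers wait on (steps 2, 5)

  // Step 1: put handler threads append to the local buffer and wake the AsyncWriter.
  public synchronized long append(byte[] edit) {
    pendingEdits.add(edit);
    notifyAll();
    return ++lastPendingTxid;
  }

  // Step 2: put handler threads block until the sync watermark covers their txid.
  public synchronized void waitForSync(long txid) throws InterruptedException {
    while (syncedTillHere < txid) {
      wait();
    }
  }

  // Step 3: the single AsyncWriter drains the buffer and hands it to the hdfs writer.
  private synchronized void asyncWriterLoop() throws InterruptedException {
    while (true) {
      while (pendingEdits.isEmpty()) {
        wait();
      }
      // stand-in for hlog.writer.append(edit) on each drained edit; a real
      // implementation would write outside the lock, this sketch keeps one monitor
      pendingEdits.clear();
      writtenTxid = lastPendingTxid;
      notifyAll(); // tell the AsyncFlusher there are new writes needing a sync
    }
  }

  // Step 4: the single AsyncFlusher issues the hdfs sync for whatever was written.
  private synchronized void asyncFlusherLoop() throws InterruptedException {
    while (true) {
      while (flushedTxid >= writtenTxid) {
        wait();
      }
      // stand-in for hlog.writer.sync()
      flushedTxid = writtenTxid;
      notifyAll(); // tell the AsyncNotifier the sync watermark has increased
    }
  }

  // Step 5: the single AsyncNotifier publishes the watermark and wakes the handlers.
  private synchronized void asyncNotifierLoop() throws InterruptedException {
    while (true) {
      while (syncedTillHere >= flushedTxid) {
        wait();
      }
      syncedTillHere = flushedTxid;
      notifyAll(); // wake every handler blocked in waitForSync()
    }
  }

  // Step 6: no LogSyncer thread; the three loops above replace it.
  public void start() {
    startDaemon("AsyncWriter", this::asyncWriterLoop);
    startDaemon("AsyncFlusher", this::asyncFlusherLoop);
    startDaemon("AsyncNotifier", this::asyncNotifierLoop);
  }

  private interface Loop { void run() throws InterruptedException; }

  private static void startDaemon(String name, Loop loop) {
    Thread t = new Thread(() -> {
      try {
        loop.run();
      } catch (InterruptedException e) {
        Thread.currentThread().interrupt();
      }
    }, name);
    t.setDaemon(true);
    t.start();
  }
}
{code}
A put handler would then do roughly {{long txid = log.append(edit);
log.waitForSync(txid);}}, which corresponds to steps 1> and 2>.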
--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira