[
https://issues.apache.org/jira/browse/HBASE-16698?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15586429#comment-15586429
]
I ran the WALPE too before and after this patch went into branch-1. Shows a
consistent minor improvement all the ways from 1 thread up through 5, 25, and
100 concurrent writers that ranges from about 2-4% less time to complete test.
At 100 threads there are less context switches... 176k vs 183k.
{code}
for i in 1 5 25 100; do
  for j in 1; do
    export HBASE_CLASSPATH_PREFIX=`pwd`/hbase/lib/hbase-server-1.4.0-SNAPSHOT-tests.jar
    ./hbase/bin/hbase --config conf_hbase classpath
    perf stat ./hbase/bin/hbase --config /home/stack/conf_hbase \
      org.apache.hadoop.hbase.wal.WALPerformanceEvaluation \
      -threads $i -iterations 1000000 -keySize 50 -valueSize 100 \
      &> "/tmp/baseline${i}.${j}.txt"
  done
done
{code}
> Performance issue: handlers stuck waiting for CountDownLatch inside
> WALKey#getWriteEntry under high writing workload
> --------------------------------------------------------------------------------------------------------------------
>
> Key: HBASE-16698
> URL: https://issues.apache.org/jira/browse/HBASE-16698
> Project: HBase
> Issue Type: Improvement
> Components: Performance
> Affects Versions: 1.2.3
> Reporter: Yu Li
> Assignee: Yu Li
> Fix For: 2.0.0
>
> Attachments: HBASE-16698.branch-1.patch,
> HBASE-16698.branch-1.v2.patch, HBASE-16698.branch-1.v2.patch,
> HBASE-16698.patch, HBASE-16698.v2.patch, hadoop0495.et2.jstack
>
>
> As titled, in our production environment we observed 98 out of 128 handlers
> stuck waiting on the CountDownLatch {{seqNumAssignedLatch}} inside
> {{WALKey#getWriteEntry}} under a high writing workload.
> After digging in, we found the problem is mainly caused by advancing the mvcc
> in the append logic. Below is a more detailed analysis:
> Under the current branch-1 code logic, all batch puts call
> {{WALKey#getWriteEntry}} after appending the edit to the WAL, and
> {{seqNumAssignedLatch}} is only released when the corresponding append call is
> handled by the RingBufferEventHandler (see {{FSWALEntry#stampRegionSequenceId}}).
> Because we currently use a single event handler for the ring buffer, the
> append calls are handled one by one (in fact, a lot of our current logic
> depends on this sequential handling), and this becomes a bottleneck under a
> high writing workload.
> The worst part is that by default we only use one WAL per RS, so appends from
> all regions are handled sequentially, which causes contention among different
> regions... A minimal sketch of this pattern follows below.
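> Roughly, the blocking pattern looks like the following. This is only an
> illustrative Java sketch of the latch-per-entry handoff described above, not
> the actual FSHLog/WALKey/FSWALEntry code; all class and method names here are
> stand-ins.
> {code}
> import java.util.concurrent.ArrayBlockingQueue;
> import java.util.concurrent.BlockingQueue;
> import java.util.concurrent.CountDownLatch;
> import java.util.concurrent.atomic.AtomicLong;
>
> public class SequentialStampingSketch {
>
>   /** Stand-in for an FSWALEntry: carries a latch released once a sequence id is stamped. */
>   static class Entry {
>     final CountDownLatch seqNumAssignedLatch = new CountDownLatch(1);
>     volatile long sequenceId = -1;
>   }
>
>   private static final BlockingQueue<Entry> ringBuffer = new ArrayBlockingQueue<>(1024);
>   private static final AtomicLong nextSeq = new AtomicLong();
>
>   /** Single consumer, analogous to the lone RingBufferEventHandler: stamps ids one by one. */
>   static void startEventHandler() {
>     Thread handler = new Thread(() -> {
>       try {
>         while (true) {
>           Entry e = ringBuffer.take();
>           e.sequenceId = nextSeq.incrementAndGet(); // stampRegionSequenceId analogue
>           e.seqNumAssignedLatch.countDown();        // only now can the producer proceed
>         }
>       } catch (InterruptedException ie) {
>         Thread.currentThread().interrupt();
>       }
>     });
>     handler.setDaemon(true);
>     handler.start();
>   }
>
>   /** Producer path, analogous to a handler doing a batch put: publish, then wait for the latch. */
>   static long appendAndGetWriteEntry(Entry e) throws InterruptedException {
>     ringBuffer.put(e);             // publish the append onto the "ring buffer"
>     e.seqNumAssignedLatch.await(); // handlers pile up here under heavy write load
>     return e.sequenceId;
>   }
>
>   public static void main(String[] args) throws Exception {
>     startEventHandler();
>     // 128 "handlers" all funnel through the single event handler thread.
>     Thread[] handlers = new Thread[128];
>     for (int i = 0; i < handlers.length; i++) {
>       handlers[i] = new Thread(() -> {
>         try {
>           appendAndGetWriteEntry(new Entry());
>         } catch (InterruptedException ignored) {
>         }
>       });
>       handlers[i].start();
>     }
>     for (Thread t : handlers) {
>       t.join();
>     }
>     System.out.println("all handlers finished; last sequence id = " + nextSeq.get());
>   }
> }
> {code}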
> To fix this, we could also make use of the "sequential appends" mechanism:
> grab the WriteEntry before publishing the append onto the ring buffer and use
> it as the sequence id; we only need to add a lock to make "grab WriteEntry"
> and "append edit" a single transaction. This will still cause contention
> inside a region but avoids contention between different regions. This
> solution has already been verified in our online environment and proved
> effective. A sketch of the idea follows below.
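> The following is a minimal Java sketch of that idea, assuming a hypothetical
> per-region MVCC object and a region-scoped lock; it is not the actual patch
> code, only an illustration of pre-assigning the sequence id under a lock
> before publishing to the ring buffer.
> {code}
> import java.util.concurrent.ArrayBlockingQueue;
> import java.util.concurrent.BlockingQueue;
> import java.util.concurrent.atomic.AtomicLong;
> import java.util.concurrent.locks.ReentrantLock;
>
> public class PreAssignSequenceIdSketch {
>
>   /** Stand-in for a per-region MVCC: hands out sequence ids up front. */
>   static class RegionMvcc {
>     private final AtomicLong seq = new AtomicLong();
>     // Region-scoped lock: makes "grab WriteEntry" + "append edit" one atomic step,
>     // so contention stays inside a region instead of across all regions on the RS.
>     final ReentrantLock appendLock = new ReentrantLock();
>
>     long begin() {
>       return seq.incrementAndGet(); // "grab the WriteEntry"
>     }
>   }
>
>   /** Stand-in for a WAL entry whose sequence id is already known when published. */
>   static class Entry {
>     final long sequenceId;
>     Entry(long sequenceId) { this.sequenceId = sequenceId; }
>   }
>
>   private static final BlockingQueue<Entry> ringBuffer = new ArrayBlockingQueue<>(1024);
>
>   /**
>    * Producer path after the change: the sequence id is assigned before the entry
>    * is published, so there is no latch to wait on afterwards.
>    */
>   static long append(RegionMvcc mvcc) throws InterruptedException {
>     mvcc.appendLock.lock();
>     try {
>       long seqId = mvcc.begin();        // grab WriteEntry before publishing
>       ringBuffer.put(new Entry(seqId)); // publish the append onto the ring buffer
>       return seqId;
>     } finally {
>       mvcc.appendLock.unlock();
>     }
>   }
>
>   public static void main(String[] args) throws Exception {
>     RegionMvcc regionA = new RegionMvcc();
>     RegionMvcc regionB = new RegionMvcc();
>     // Writes to different regions use different locks, so they no longer contend
>     // on a single latch released by the lone event handler.
>     System.out.println("regionA seq = " + append(regionA));
>     System.out.println("regionB seq = " + append(regionB));
>   }
> }
> {code}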
> Notice that for the master (2.0) branch, since we already changed the write
> pipeline to sync before writing the memstore (HBASE-15158), this issue only
> exists in the ASYNC_WAL write scenario.