[
https://issues.apache.org/jira/browse/HBASE-16698?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15588348#comment-15588348
]
Yu Li commented on HBASE-16698:
-------------------------------
Perf data for one single region:
Test environment (identical to the previous test except that the target table is not presplit):
{noformat}
YCSB 0.7.0
4 physical client nodes, 8 YCSB processes per node, 32 threads per YCSB process
recordcount=3,200,000, fieldcount=1, fieldlength=1024, insertproportion=1,
requestdistribution=uniform
1 single RS, 1 single region (no presplit), handlercount=128,
hbase.wal.storage.policy=ALL_SSD
{noformat}
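For reference, the non-presplit target table can be created with the plain HBase 1.x client API; a minimal sketch, assuming YCSB's default table name {{usertable}} and a column family named {{f}} (both names depend on the YCSB HBase binding settings):
{code:java}
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.HColumnDescriptor;
import org.apache.hadoop.hbase.HTableDescriptor;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Admin;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;

public class CreateSingleRegionTable {
  public static void main(String[] args) throws Exception {
    Configuration conf = HBaseConfiguration.create();
    try (Connection conn = ConnectionFactory.createConnection(conf);
         Admin admin = conn.getAdmin()) {
      // Table/family names are assumptions; adjust to the YCSB binding settings.
      HTableDescriptor desc = new HTableDescriptor(TableName.valueOf("usertable"));
      desc.addFamily(new HColumnDescriptor("f"));
      // No split keys are passed, so the table starts with one single region.
      admin.createTable(desc);
    }
  }
}
{code}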
And the comparison data (one round):
||TestCase||Throughput||AverageLatency(us)||
|w/o patch|69924.42|14544.38|
|w patch|86373.70|11770.09|
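(That is roughly a 23.5% throughput gain: 86373.70 / 69924.42 ≈ 1.235, and about a 19% drop in average latency: 11770.09 / 14544.38 ≈ 0.81.)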
From the result we can see that even with one single region, performance w/ patch
is better under high concurrency, which indicates that the disruptor
publish-and-consume processing costs more time than the lock.
I could also see fewer handlers waiting on the CountDownLatch in jstack during
testing w/o patch, which could explain why the throughput is better than that of
the multi-region test.
[~chenheng] FYI.
> Performance issue: handlers stuck waiting for CountDownLatch inside
> WALKey#getWriteEntry under high writing workload
> --------------------------------------------------------------------------------------------------------------------
>
> Key: HBASE-16698
> URL: https://issues.apache.org/jira/browse/HBASE-16698
> Project: HBase
> Issue Type: Improvement
> Components: Performance
> Affects Versions: 1.2.3
> Reporter: Yu Li
> Assignee: Yu Li
> Fix For: 2.0.0
>
> Attachments: HBASE-16698.branch-1.patch,
> HBASE-16698.branch-1.v2.patch, HBASE-16698.branch-1.v2.patch,
> HBASE-16698.patch, HBASE-16698.v2.patch, hadoop0495.et2.jstack
>
>
> As titled, in our production environment we observed 98 out of 128 handlers
> stuck waiting on the CountDownLatch {{seqNumAssignedLatch}} inside
> {{WALKey#getWriteEntry}} under a high writing workload.
> After digging into it, we found that the problem is mainly caused by
> advancing the mvcc in the append logic. Below is some detailed analysis:
> Under the current branch-1 code logic, all batch puts call
> {{WALKey#getWriteEntry}} after appending the edit to the WAL, and
> {{seqNumAssignedLatch}} is only released when the corresponding append call is
> handled by the RingBufferEventHandler (see {{FSWALEntry#stampRegionSequenceId}}).
> Because we currently use a single event handler for the ring buffer, the
> append calls are handled one by one (in fact, a lot of our current logic
> depends on this sequential handling), and this becomes a bottleneck under a
> high writing workload.
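> A simplified sketch of the hand-off described above (hypothetical class and
> field names, not the actual branch-1 code): the handler thread parks on a
> latch after publishing the append, and only the single ring buffer consumer
> releases it when it stamps the sequence id.
> {code:java}
> import java.util.concurrent.CountDownLatch;
>
> public class LatchHandoffSketch {
>
>   // Hypothetical stand-in for WALKey: the handler side parks on the latch.
>   static class SketchWALKey {
>     private final CountDownLatch seqNumAssignedLatch = new CountDownLatch(1);
>     private volatile long sequenceId = -1;
>
>     // Called by a handler thread after it has published the append onto the ring buffer.
>     long getWriteEntry() throws InterruptedException {
>       seqNumAssignedLatch.await(); // where the 98 of 128 handlers were parked
>       return sequenceId;
>     }
>
>     // Called by the single ring buffer consumer once it processes the append
>     // (the role FSWALEntry#stampRegionSequenceId plays in branch-1).
>     void stampRegionSequenceId(long seqId) {
>       this.sequenceId = seqId;
>       seqNumAssignedLatch.countDown();
>     }
>   }
>
>   public static void main(String[] args) throws Exception {
>     SketchWALKey key = new SketchWALKey();
>     // Single consumer thread, handling appends one by one.
>     new Thread(() -> key.stampRegionSequenceId(42L)).start();
>     System.out.println("sequence id = " + key.getWriteEntry()); // blocks until stamped
>   }
> }
> {code}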
> The worst part is that by default we use only one WAL per RS, so appends on
> all regions are handled sequentially, which causes contention among
> different regions...
> To fix this, we could make use of the "sequential appends" mechanism: grab
> the WriteEntry before publishing the append onto the ring buffer and use it as
> the sequence id, adding a lock to make "grab WriteEntry" and "append edit" a
> single transaction. This still causes contention inside a region but avoids
> contention between different regions. This solution has already been verified
> in our online environment and proved to be effective.
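> A rough sketch of the locking idea (hypothetical names, not the actual patch):
> the sequence id is assigned and the append is published under one region-scoped
> lock, so handlers never wait on the ring buffer consumer and contention stays
> within a region.
> {code:java}
> import java.util.concurrent.atomic.AtomicLong;
> import java.util.concurrent.locks.ReentrantLock;
>
> // Hypothetical per-region sequencer illustrating the idea, not the real patch.
> public class RegionAppendSketch {
>   private final ReentrantLock appendLock = new ReentrantLock(); // one lock per region
>   private final AtomicLong sequenceId = new AtomicLong(0);
>
>   // Assign the sequence id (the "WriteEntry") and publish the append as one
>   // atomic step; handlers writing other regions never block on this lock.
>   long appendAndGetWriteEntry(Runnable publishToRingBuffer) {
>     appendLock.lock();
>     try {
>       long seqId = sequenceId.incrementAndGet(); // grab the WriteEntry up front
>       publishToRingBuffer.run();                 // append the edit with seqId already known
>       return seqId;                              // no CountDownLatch wait needed
>     } finally {
>       appendLock.unlock();
>     }
>   }
>
>   public static void main(String[] args) {
>     RegionAppendSketch region = new RegionAppendSketch();
>     long seq = region.appendAndGetWriteEntry(() -> { /* publish onto the ring buffer */ });
>     System.out.println("assigned sequence id = " + seq);
>   }
> }
> {code}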
> Notice that for the master (2.0) branch, since we already changed the write
> pipeline to sync before writing the memstore (HBASE-15158), this issue only
> exists for the ASYNC_WAL writes scenario.
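> For context, ASYNC_WAL durability is requested per mutation (or set on the
> table); a minimal client-side sketch, with row/family/qualifier names being
> assumptions:
> {code:java}
> import org.apache.hadoop.hbase.client.Durability;
> import org.apache.hadoop.hbase.client.Put;
> import org.apache.hadoop.hbase.util.Bytes;
>
> public class AsyncWalPutSketch {
>   public static void main(String[] args) {
>     // Row/family/qualifier names are illustrative only.
>     Put put = new Put(Bytes.toBytes("row-1"));
>     put.addColumn(Bytes.toBytes("f"), Bytes.toBytes("q"), Bytes.toBytes("v"));
>     // Ask for asynchronous WAL sync; the Put would then go through a normal Table#put.
>     put.setDurability(Durability.ASYNC_WAL);
>   }
> }
> {code}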
--
This message was sent by Atlassian JIRA
(v6.3.4#6332)