[
https://issues.apache.org/jira/browse/HBASE-14463?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14968374#comment-14968374
]
Yu Li commented on HBASE-14463:
-------------------------------
{quote}
The micro benchmark numbers you mentioned above - That is in trunk code base?
{quote}
Yes, it's the result of TestBucketCache#testCacheMultiThreadedSingleKey, in the
trunk code base.
{quote}
In trunk this is no longer true
{quote}
Agreed, but the UT shows the change from IdLock to IdReadWriteLock is still beneficial.
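For reference, a minimal sketch of the idea behind IdReadWriteLock (not the actual trunk class, whose pooling differs): hand out one ReentrantReadWriteLock per id, shared by all callers, so parallel readers of the same block stop serializing on a synchronized getLockEntry:
{code}
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.locks.ReentrantReadWriteLock;

// Minimal sketch only: one read-write lock per id, pooled in a map.
public class SimpleIdReadWriteLock {
  private final ConcurrentHashMap<Long, ReentrantReadWriteLock> lockPool =
      new ConcurrentHashMap<>();

  // Every caller passing the same id gets the same lock instance, so
  // readers can share it while writers still get exclusive access.
  public ReentrantReadWriteLock getLock(long id) {
    return lockPool.computeIfAbsent(id, k -> new ReentrantReadWriteLock());
  }
}
{code}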
Regarding the PE result, I tried to reproduce it locally, also with 100GB of
data and multi-get with 25 threads, using a command line like:
{noformat}
bin/hbase org.apache.hadoop.hbase.PerformanceEvaluation --nomapred
--multiGet=20 randomRead 25
{noformat}
I ran each case 3 times and took the average to limit the deviation. Here is
the result:
{noformat}
before patch:
Min: 102832ms Max: 103637ms Avg: 103394ms
Min: 101003ms Max: 101836ms Avg: 101564ms
Min: 101464ms Max: 102498ms Avg: 102254ms
Average: 102404ms
after v12 patch:
Min: 99542ms Max: 100141ms Avg: 99932ms
Min: 102927ms Max: 103863ms Avg: 103580ms
Min: 104310ms Max: 104880ms Avg: 104683ms
Average: 102731ms
with slow purge (when lockPool size reaches 500):
Min: 100144ms Max: 101144ms Avg: 100871ms
Min: 103764ms Max: 104588ms Avg: 104350ms
Min: 101310ms Max: 102244ms Avg: 102010ms
Average: 102410ms
{noformat}
From these numbers I can see a bigger stddev after the patch but a similar
average. Refer to the attached PE output for more details.
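For the record, the "slow purge" case above is not the exact patch logic, but conceptually it is along these lines (a rough sketch; the WeakReference-based pool mirrors the WeakObjectPool idea, and the class/field names here are made up):
{code}
import java.lang.ref.WeakReference;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.locks.ReentrantReadWriteLock;

// Illustrative only: locks are held through WeakReferences so unused ones
// become collectable, and a sweep runs once the pool passes a threshold
// (500, matching the test above).
public class PurgingIdReadWriteLock {
  private static final int PURGE_THRESHOLD = 500;
  private final ConcurrentHashMap<Long, WeakReference<ReentrantReadWriteLock>>
      lockPool = new ConcurrentHashMap<>();

  public ReentrantReadWriteLock getLock(long id) {
    while (true) {
      WeakReference<ReentrantReadWriteLock> ref = lockPool.computeIfAbsent(
          id, k -> new WeakReference<>(new ReentrantReadWriteLock()));
      ReentrantReadWriteLock lock = ref.get();
      if (lock != null) {
        if (lockPool.size() > PURGE_THRESHOLD) {
          purge();
        }
        return lock;
      }
      // The lock was garbage collected; drop the stale entry and retry.
      lockPool.remove(id, ref);
    }
  }

  // Sweep out entries whose lock object has already been collected.
  private void purge() {
    lockPool.entrySet().removeIf(e -> e.getValue().get() == null);
  }
}
{code}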
[~anoop.hbase], is my testing the same as yours? Is there any difference, or
did you perhaps run each case only once and happen to hit the best case w/o
the patch and the worst case w/ it? Just let me know your thoughts, thanks.
> Severe performance downgrade when parallel reading a single key from BucketCache
> --------------------------------------------------------------------------------
>
> Key: HBASE-14463
> URL: https://issues.apache.org/jira/browse/HBASE-14463
> Project: HBase
> Issue Type: Bug
> Affects Versions: 0.98.14, 1.1.2
> Reporter: Yu Li
> Assignee: Yu Li
> Fix For: 2.0.0, 1.2.0, 1.3.0, 0.98.16
>
> Attachments: GC_with_WeakObjectPool.png, HBASE-14463.patch,
> HBASE-14463_v11.patch, HBASE-14463_v12.patch, HBASE-14463_v2.patch,
> HBASE-14463_v3.patch, HBASE-14463_v4.patch, HBASE-14463_v5.patch,
> TestBucketCache-new_with_IdLock.png,
> TestBucketCache-new_with_IdReadWriteLock.png,
> TestBucketCache_with_IdLock-latest.png, TestBucketCache_with_IdLock.png,
> TestBucketCache_with_IdReadWriteLock-latest.png,
> TestBucketCache_with_IdReadWriteLock-resolveLockLeak.png,
> TestBucketCache_with_IdReadWriteLock.png
>
>
> We store feature data of online items in HBase, do machine learning on these
> features, and supply the outputs to our online search engine. In such a
> scenario we launch hundreds of yarn workers, and each worker reads all
> features of one item (i.e. a single rowkey in HBase), so there is heavy
> parallel reading on a single rowkey.
> We were using LruCache but recently started trying BucketCache to resolve a
> GC issue, and, just as titled, we have observed a severe performance
> downgrade. After some analysis we found the root cause is the lock in
> BucketCache#getBlock, as shown below
> {code}
> try {
>   lockEntry = offsetLock.getLockEntry(bucketEntry.offset());
>   // ...
>   if (bucketEntry.equals(backingMap.get(key))) {
>     // ...
>     int len = bucketEntry.getLength();
>     Cacheable cachedBlock = ioEngine.read(bucketEntry.offset(), len,
>         bucketEntry.deserializerReference(this.deserialiserMap));
> {code}
> Since ioEngine.read involves an array copy, it is much more time-consuming
> than the corresponding operation in LruCache. And since IdLock#getLockEntry
> uses synchronized, parallel reads landing on the same bucket are executed
> serially, which causes really bad performance.
> To resolve the problem, we propose to use ReentrantReadWriteLock in
> BucketCache, and introduce a new class called IdReadWriteLock to implement it.
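> A simplified sketch of the proposed change (the field name idReadWriteLock
> is illustrative; see the attached patches for the real code): readers take
> the shared read lock, so parallel gets on the same key proceed concurrently:
> {code}
> ReentrantReadWriteLock lock = idReadWriteLock.getLock(bucketEntry.offset());
> lock.readLock().lock(); // shared mode: many readers may hold it at once
> try {
>   if (bucketEntry.equals(backingMap.get(key))) {
>     int len = bucketEntry.getLength();
>     Cacheable cachedBlock = ioEngine.read(bucketEntry.offset(), len,
>         bucketEntry.deserializerReference(this.deserialiserMap));
>     // ...
>   }
> } finally {
>   lock.readLock().unlock(); // always release, even on exception
> }
> {code}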