[ 
https://issues.apache.org/jira/browse/HBASE-14463?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14966389#comment-14966389
 ] 

Jingcheng Du commented on HBASE-14463:
--------------------------------------

It uses the WeakObjectPool, so a lock can be collected by GC very quickly if 
no other thread is requesting the same lock. And acquireLock has to purge the 
stale references from the pool every time, which may slow down performance. 
Could we add a threshold to the purge method? For example, if the number of 
cached objects is larger than the threshold we purge, otherwise we don't. We 
could add such logic to WeakObjectPool, I think.
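
A minimal sketch of that idea, assuming a weak-reference pool keyed by ids; 
the class and field names below (e.g. ThresholdWeakPool, PURGE_THRESHOLD) are 
invented for illustration and are not the actual WeakObjectPool API:

{code}
import java.lang.ref.Reference;
import java.lang.ref.ReferenceQueue;
import java.lang.ref.WeakReference;
import java.util.concurrent.ConcurrentHashMap;

// Illustrative weak pool with a threshold-gated purge; class and field
// names are invented here, not the actual WeakObjectPool API.
public class ThresholdWeakPool<K> {
  private static final int PURGE_THRESHOLD = 1000; // assumed tunable

  // Weak reference that remembers its key so a cleared entry can be removed.
  private static final class KeyedRef<K> extends WeakReference<Object> {
    final K key;
    KeyedRef(K key, Object value, ReferenceQueue<Object> queue) {
      super(value, queue);
      this.key = key;
    }
  }

  private final ConcurrentHashMap<K, KeyedRef<K>> pool =
      new ConcurrentHashMap<>();
  private final ReferenceQueue<Object> staleRefs = new ReferenceQueue<>();

  public Object get(K key) {
    maybePurge();
    while (true) {
      KeyedRef<K> ref = pool.computeIfAbsent(key,
          k -> new KeyedRef<>(k, new Object(), staleRefs));
      Object lock = ref.get();
      if (lock != null) {
        return lock; // caller now holds a strong reference
      }
      pool.remove(key, ref); // value was collected; retry with a fresh entry
    }
  }

  // Only drain the reference queue once the pool grows past the threshold,
  // instead of purging on every acquire.
  private void maybePurge() {
    if (pool.size() <= PURGE_THRESHOLD) {
      return;
    }
    Reference<?> ref;
    while ((ref = staleRefs.poll()) != null) {
      @SuppressWarnings("unchecked")
      KeyedRef<K> keyed = (KeyedRef<K>) ref;
      pool.remove(keyed.key, keyed);
    }
  }
}
{code}

The cost of a purge itself is unchanged, but it no longer runs on every 
acquireLock call while the pool stays small.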

> Severe performance downgrade when parallel reading a single key from 
> BucketCache
> --------------------------------------------------------------------------------
>
>                 Key: HBASE-14463
>                 URL: https://issues.apache.org/jira/browse/HBASE-14463
>             Project: HBase
>          Issue Type: Bug
>    Affects Versions: 0.98.14, 1.1.2
>            Reporter: Yu Li
>            Assignee: Yu Li
>             Fix For: 2.0.0, 1.2.0, 1.3.0, 0.98.16
>
>         Attachments: GC_with_WeakObjectPool.png, HBASE-14463.patch, 
> HBASE-14463_v11.patch, HBASE-14463_v12.patch, HBASE-14463_v2.patch, 
> HBASE-14463_v3.patch, HBASE-14463_v4.patch, HBASE-14463_v5.patch, 
> TestBucketCache-new_with_IdLock.png, 
> TestBucketCache-new_with_IdReadWriteLock.png, 
> TestBucketCache_with_IdLock-latest.png, TestBucketCache_with_IdLock.png, 
> TestBucketCache_with_IdReadWriteLock-latest.png, 
> TestBucketCache_with_IdReadWriteLock-resolveLockLeak.png, 
> TestBucketCache_with_IdReadWriteLock.png
>
>
> We store feature data of online items in HBase, do machine learning on these 
> features, and supply the outputs to our online search engine. In this 
> scenario we launch hundreds of YARN workers, and each worker reads all 
> features of one item (i.e. a single rowkey in HBase), so there is heavy 
> parallel reading on a single rowkey.
> We were using LruCache but recently started to try BucketCache to resolve a 
> GC issue, and, just as titled, we observed a severe performance downgrade. 
> After some analysis we found the root cause is the lock in 
> BucketCache#getBlock, as shown below:
> {code}
>       try {
>         lockEntry = offsetLock.getLockEntry(bucketEntry.offset());
>         // ...
>         if (bucketEntry.equals(backingMap.get(key))) {
>           // ...
>           int len = bucketEntry.getLength();
>           Cacheable cachedBlock = ioEngine.read(bucketEntry.offset(), len,
>               bucketEntry.deserializerReference(this.deserialiserMap));
> {code}
> Since ioEngine.read involves an array copy, it is much more time-consuming 
> than the equivalent operation in LruCache. And since IdLock#getLockEntry is 
> synchronized, parallel reads landing on the same bucket are executed 
> serially, which causes really bad performance.
> To resolve the problem, we propose to use ReentrantReadWriteLock in 
> BucketCache, introducing a new class called IdReadWriteLock to implement 
> it, as sketched below.
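>
> A minimal sketch of the idea, assuming a plain id-to-lock map (the actual 
> IdReadWriteLock in the attached patches may pool the locks differently, 
> e.g. via WeakObjectPool as discussed in the comments; the names here 
> simply mirror the IdLock usage above):
> {code}
> import java.util.concurrent.ConcurrentHashMap;
> import java.util.concurrent.locks.ReentrantReadWriteLock;
>
> // Maps each id (e.g. a bucket offset) to a ReentrantReadWriteLock so that
> // parallel readers of the same block can proceed concurrently, while
> // writers (e.g. evictions) still get exclusive access.
> public class IdReadWriteLock {
>   private final ConcurrentHashMap<Long, ReentrantReadWriteLock> lockPool =
>       new ConcurrentHashMap<>();
>
>   public ReentrantReadWriteLock getLock(long id) {
>     return lockPool.computeIfAbsent(id, k -> new ReentrantReadWriteLock());
>   }
> }
> {code}
> The read path in BucketCache#getBlock would then take the read lock instead 
> of the exclusive IdLock entry:
> {code}
> ReentrantReadWriteLock lock = offsetLock.getLock(bucketEntry.offset());
> lock.readLock().lock();
> try {
>   // multiple threads may run ioEngine.read for the same offset in parallel
> } finally {
>   lock.readLock().unlock();
> }
> {code}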



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
