[
https://issues.apache.org/jira/browse/HBASE-14463?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14935225#comment-14935225
]
Yu Li commented on HBASE-14463:
-------------------------------
A summary of the recent back-and-forth discussion and changes:
Each block in the bucket cache has an offset recording its position in the
buckets, and this offset is used to get/evict the block from the cache. By
design a block must not be evicted while it is being read, so we need a lock
(in this JIRA a read/write lock replacing the synchronized IdLock) as the
gate-keeper for each offset, and such offset->lock pairs (i.e. "entries") are
stored in a hash map.
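For illustration, a minimal sketch of such an offset->lock map (class and
method names are mine, not the patch's; parallel readers of one offset share
the read lock, while eviction takes the write lock and waits for in-flight
reads):
{code}
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.locks.ReentrantReadWriteLock;

// Illustrative sketch only: one ReentrantReadWriteLock per block offset.
public class OffsetLockMapSketch {
  private final ConcurrentHashMap<Long, ReentrantReadWriteLock> map =
      new ConcurrentHashMap<Long, ReentrantReadWriteLock>();

  public ReentrantReadWriteLock getLock(long offset) {
    ReentrantReadWriteLock lock = map.get(offset);
    if (lock == null) {
      ReentrantReadWriteLock newLock = new ReentrantReadWriteLock();
      // putIfAbsent guarantees a single lock instance per offset
      lock = map.putIfAbsent(offset, newLock);
      if (lock == null) {
        lock = newLock;
      }
    }
    return lock;
  }
}
{code}
The tricky part is how entries ever leave this map, which is what the rest of
this summary is about.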
To avoid garbage, effort was made to remove an entry from the map once it is
no longer in use, but after dropping the synchronized block the original patch
introduced a lock leak issue. Patches v3~v5 tried to resolve the problem, but
each either had a performance issue or went in the wrong direction.
The [latest patch in
rb|https://reviews.apache.org/r/38626/diff/10#index_header] (not attached here
since it depends on some not-yet-committed code) resolves the lock leak problem
by dropping the remove action altogether instead of synchronizing around it. It
uses the WeakObjectPool introduced in HBASE-14268, which holds entries through
weak references so the JVM's GC clears unused entries automatically, while
making sure a lock still in use won't get removed (GC won't clear an object
that is strongly referenced).
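For illustration, a rough sketch of the weak-reference idea (this is not the
actual WeakObjectPool from HBASE-14268; the real pool also purges stale map
entries via a ReferenceQueue, which is omitted here):
{code}
import java.lang.ref.WeakReference;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.locks.ReentrantReadWriteLock;

// Illustration only: the map holds the lock through a WeakReference, so a
// lock no caller strongly references any more becomes collectible and no
// explicit remove (and hence no remove/get race) is needed. A lock held by
// a reader or writer is strongly referenced by that thread, so GC keeps it.
public class WeakLockPoolSketch {
  private final ConcurrentHashMap<Long, WeakReference<ReentrantReadWriteLock>>
      pool = new ConcurrentHashMap<Long, WeakReference<ReentrantReadWriteLock>>();

  public ReentrantReadWriteLock getLock(long offset) {
    while (true) {
      WeakReference<ReentrantReadWriteLock> ref = pool.get(offset);
      ReentrantReadWriteLock lock = (ref == null) ? null : ref.get();
      if (lock != null) {
        return lock; // live lock, shared with any concurrent readers
      }
      ReentrantReadWriteLock newLock = new ReentrantReadWriteLock();
      WeakReference<ReentrantReadWriteLock> newRef =
          new WeakReference<ReentrantReadWriteLock>(newLock);
      if (ref == null) {
        if (pool.putIfAbsent(offset, newRef) == null) {
          return newLock;
        }
      } else if (pool.replace(offset, ref, newRef)) {
        return newLock; // swapped out a stale, GC-cleared reference
      }
      // lost a race with another thread; retry
    }
  }
}
{code}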
This should be a complete solution to the only issue left here, and I think the
patch is ready to go once HBASE-14268 is in.
Let me know if you have different thoughts, thanks.
> Severe performance downgrade when parallel reading a single key from
> BucketCache
> --------------------------------------------------------------------------------
>
> Key: HBASE-14463
> URL: https://issues.apache.org/jira/browse/HBASE-14463
> Project: HBase
> Issue Type: Bug
> Affects Versions: 0.98.14, 1.1.2
> Reporter: Yu Li
> Assignee: Yu Li
> Fix For: 2.0.0, 1.2.0, 1.3.0, 0.98.16
>
> Attachments: HBASE-14463.patch, HBASE-14463_v2.patch,
> HBASE-14463_v3.patch, HBASE-14463_v4.patch, HBASE-14463_v5.patch,
> TestBucketCache-new_with_IdLock.png,
> TestBucketCache-new_with_IdReadWriteLock.png,
> TestBucketCache_with_IdLock.png,
> TestBucketCache_with_IdReadWriteLock-resolveLockLeak.png,
> TestBucketCache_with_IdReadWriteLock.png
>
>
> We store feature data of online items in HBase, do machine learning on these
> features, and supply the outputs to our online search engine. In such a
> scenario we launch hundreds of YARN workers, and each worker reads all
> features of one item (i.e. a single rowkey in HBase), so there will be heavy
> parallel reading on a single rowkey.
> We were using LruCache but recently started to try BucketCache to resolve GC
> issues, and just as titled we have observed a severe performance downgrade.
> After some analysis we found the root cause to be the lock in
> BucketCache#getBlock, as shown below:
> {code}
> try {
>   lockEntry = offsetLock.getLockEntry(bucketEntry.offset());
>   // ...
>   if (bucketEntry.equals(backingMap.get(key))) {
>     // ...
>     int len = bucketEntry.getLength();
>     Cacheable cachedBlock = ioEngine.read(bucketEntry.offset(), len,
>         bucketEntry.deserializerReference(this.deserialiserMap));
> {code}
> Since ioEngine.read involves an array copy, it is much more time-consuming
> than the corresponding operation in LruCache. And since IdLock#getLockEntry
> uses synchronized, parallel reads landing on the same bucket entry are
> executed serially, which causes really bad performance.
> To resolve the problem, we propose to use ReentrantReadWriteLock in
> BucketCache, and introduce a new class called IdReadWriteLock to implement
> it.
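> A rough sketch of how the read path could then look (illustrative only,
> assuming a getLock(offset) helper that returns one ReentrantReadWriteLock
> per offset; this is not the actual API, and the real patch may differ):
> {code}
> ReentrantReadWriteLock lock = offsetLock.getLock(bucketEntry.offset());
> lock.readLock().lock(); // many readers of one offset proceed in parallel
> try {
>   if (bucketEntry.equals(backingMap.get(key))) {
>     int len = bucketEntry.getLength();
>     return ioEngine.read(bucketEntry.offset(), len,
>         bucketEntry.deserializerReference(this.deserialiserMap));
>   }
> } finally {
>   lock.readLock().unlock();
> }
> // eviction would take lock.writeLock() instead, blocking until all
> // in-flight reads of that offset have finished
> {code}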
--
This message was sent by Atlassian JIRA
(v6.3.4#6332)