[
https://issues.apache.org/jira/browse/HBASE-14463?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14909886#comment-14909886
]
Hadoop QA commented on HBASE-14463:
-----------------------------------
{color:red}-1 overall{color}. Here are the results of testing the latest
attachment
http://issues.apache.org/jira/secure/attachment/12762608/HBASE-14463_v5.patch
against master branch at commit 526520de0a9d7a29fcf1b4c521f017ca75a46cbc.
ATTACHMENT ID: 12762608
{color:green}+1 @author{color}. The patch does not contain any @author
tags.
{color:green}+1 tests included{color}. The patch appears to include 8 new
or modified tests.
{color:green}+1 hadoop versions{color}. The patch compiles with all
supported hadoop versions (2.4.0 2.4.1 2.5.0 2.5.1 2.5.2 2.6.0 2.7.0 2.7.1)
{color:green}+1 javac{color}. The applied patch does not increase the
total number of javac compiler warnings.
{color:green}+1 protoc{color}. The applied patch does not increase the
total number of protoc compiler warnings.
{color:green}+1 javadoc{color}. The javadoc tool did not generate any
warning messages.
{color:green}+1 checkstyle{color}. The applied patch does not increase the
total number of checkstyle errors.
{color:green}+1 findbugs{color}. The patch does not introduce any new
Findbugs (version 2.0.3) warnings.
{color:green}+1 release audit{color}. The applied patch does not increase
the total number of release audit warnings.
{color:green}+1 lineLengths{color}. The patch does not introduce lines
longer than 100 characters.
{color:green}+1 site{color}. The mvn post-site goal succeeds with this patch.
{color:red}-1 core tests{color}. The patch failed these unit tests:
org.apache.hadoop.hbase.client.TestShell
org.apache.hadoop.hbase.client.TestReplicationShell
{color:red}-1 core zombie tests{color}. There is 1 zombie test:
at
org.apache.hadoop.mapred.TestFixedLengthInputFormat.testFormatCompressedIn(TestFixedLengthInputFormat.java:90)
Test results:
https://builds.apache.org/job/PreCommit-HBASE-Build/15776//testReport/
Release Findbugs (version 2.0.3) warnings:
https://builds.apache.org/job/PreCommit-HBASE-Build/15776//artifact/patchprocess/newFindbugsWarnings.html
Checkstyle Errors:
https://builds.apache.org/job/PreCommit-HBASE-Build/15776//artifact/patchprocess/checkstyle-aggregate.html
Console output:
https://builds.apache.org/job/PreCommit-HBASE-Build/15776//console
This message is automatically generated.
> Severe performance downgrade when parallel reading a single key from
> BucketCache
> --------------------------------------------------------------------------------
>
> Key: HBASE-14463
> URL: https://issues.apache.org/jira/browse/HBASE-14463
> Project: HBase
> Issue Type: Bug
> Affects Versions: 0.98.14, 1.1.2
> Reporter: Yu Li
> Assignee: Yu Li
> Fix For: 2.0.0, 1.2.0, 1.3.0, 1.0.3, 1.1.3, 0.98.16
>
> Attachments: HBASE-14463.patch, HBASE-14463_v2.patch,
> HBASE-14463_v3.patch, HBASE-14463_v4.patch, HBASE-14463_v5.patch,
> TestBucketCache-new_with_IdLock.png,
> TestBucketCache-new_with_IdReadWriteLock.png,
> TestBucketCache_with_IdLock.png,
> TestBucketCache_with_IdReadWriteLock-resolveLockLeak.png,
> TestBucketCache_with_IdReadWriteLock.png
>
>
> We store feature data of online items in HBase, do machine learning on these
> features, and supply the outputs to our online search engine. In such a
> scenario we launch hundreds of YARN workers, and each worker reads all
> features of one item (i.e. a single rowkey in HBase), so there is heavy
> parallel reading on a single rowkey.
> We were using LruCache but recently started trying BucketCache to resolve GC
> issues, and, just as titled, we have observed a severe performance downgrade.
> After some analysis we found the root cause is the lock in
> BucketCache#getBlock, as shown below:
> {code}
> try {
>   lockEntry = offsetLock.getLockEntry(bucketEntry.offset());
>   // ...
>   if (bucketEntry.equals(backingMap.get(key))) {
>     // ...
>     int len = bucketEntry.getLength();
>     Cacheable cachedBlock = ioEngine.read(bucketEntry.offset(), len,
>       bucketEntry.deserializerReference(this.deserialiserMap));
> {code}
> Since ioEngine.read involves an array copy, it is much more time-consuming
> than the equivalent operation in LruCache. And since IdLock#getLockEntry is
> synchronized, parallel reads landing on the same block are executed
> serially, which causes really bad performance.
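The contention described above can be modeled with a minimal sketch, assuming an exclusive per-id mutex like the one IdLock provides (class and method names below are illustrative, not the actual HBase code): even two pure readers of the same id block each other.

```java
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ConcurrentMap;
import java.util.concurrent.locks.ReentrantLock;

// Hypothetical minimal model of IdLock's exclusive semantics: one mutex per
// id (e.g. a bucket offset), so even two pure readers of the same id are
// serialized, which is the performance problem reported in this issue.
public class ExclusiveIdLockSketch {
  private final ConcurrentMap<Long, ReentrantLock> locks = new ConcurrentHashMap<>();

  public ReentrantLock getLock(long id) {
    // computeIfAbsent guarantees all callers of the same id share one lock.
    return locks.computeIfAbsent(id, k -> new ReentrantLock());
  }

  public static void main(String[] args) throws InterruptedException {
    ExclusiveIdLockSketch idLock = new ExclusiveIdLockSketch();
    ReentrantLock lock = idLock.getLock(42L);
    lock.lock(); // first "reader" holds the lock

    // A second reader on another thread cannot enter: reads are serialized.
    final boolean[] acquired = new boolean[1];
    Thread secondReader = new Thread(() -> acquired[0] = lock.tryLock());
    secondReader.start();
    secondReader.join();
    System.out.println(acquired[0]); // false: exclusive lock blocks a concurrent read
    lock.unlock();
  }
}
```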
> To resolve the problem, we propose to use a ReentrantReadWriteLock in
> BucketCache, introducing a new class called IdReadWriteLock to implement it.
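The proposed fix can be sketched as follows, assuming IdReadWriteLock maps each id to a java.util.concurrent ReentrantReadWriteLock (a simplified illustration, not the actual patch): readers of the same block then proceed in parallel, while writers remain exclusive.

```java
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ConcurrentMap;
import java.util.concurrent.locks.ReentrantReadWriteLock;

// Hypothetical sketch of the proposed IdReadWriteLock: one
// ReentrantReadWriteLock per id, so concurrent reads of the same cached
// block share the read lock instead of serializing on an exclusive mutex.
public class IdReadWriteLockSketch {
  private final ConcurrentMap<Long, ReentrantReadWriteLock> locks = new ConcurrentHashMap<>();

  public ReentrantReadWriteLock getLock(long id) {
    return locks.computeIfAbsent(id, k -> new ReentrantReadWriteLock());
  }

  public static void main(String[] args) {
    IdReadWriteLockSketch idLock = new IdReadWriteLockSketch();
    ReentrantReadWriteLock lock = idLock.getLock(42L);

    // Multiple readers can hold the read lock at the same time.
    lock.readLock().lock();
    boolean secondReader = lock.readLock().tryLock();
    System.out.println(secondReader); // true: shared read access

    // A writer is exclusive: it cannot acquire while readers hold the lock
    // (ReentrantReadWriteLock does not allow upgrading read -> write).
    boolean writerWhileReading = lock.writeLock().tryLock();
    System.out.println(writerWhileReading); // false: writes stay exclusive

    lock.readLock().unlock();
    lock.readLock().unlock();
  }
}
```

Evicting or overwriting a bucket would take the write lock, preserving the mutual exclusion the original code relied on while removing reader-reader contention.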
--
This message was sent by Atlassian JIRA
(v6.3.4#6332)