[ 
https://issues.apache.org/jira/browse/HBASE-14463?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yu Li updated HBASE-14463:
--------------------------
    Description: 
We store feature data of online items in HBase, do machine learning on these 
features, and supply the outputs to our online search engine. In such a 
scenario we launch hundreds of YARN workers, and each worker reads all the 
features of one item (i.e. a single rowkey in HBase), so there is heavy 
parallel reading on a single rowkey.

We were using LruCache but recently started trying BucketCache to resolve GC 
issues, and, just as the title says, we have observed a severe performance 
downgrade. After some analysis we found the root cause is the lock in 
BucketCache#getBlock, as shown below:
{code}
      try {
        lockEntry = offsetLock.getLockEntry(bucketEntry.offset());
        // ...
        if (bucketEntry.equals(backingMap.get(key))) {
          // ...
          int len = bucketEntry.getLength();
          Cacheable cachedBlock = ioEngine.read(bucketEntry.offset(), len,
              bucketEntry.deserializerReference(this.deserialiserMap));
{code}

Since ioEngine.read involves an array copy, it is much more costly than the 
corresponding operation in LruCache. And since IdLock#getLockEntry uses 
synchronized, parallel reads landing on the same bucket entry are executed 
serially, which causes really bad performance.
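
For illustration, here is a minimal, hypothetical demo of the effect (the class 
name, thread count and sleep time are made up, this is not HBase code): when 
every reader of the same offset has to take one exclusive lock, the simulated 
reads run one at a time and total latency grows linearly with the number of 
readers.
{code}
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.locks.ReentrantLock;

public class SerializedReadDemo {
  public static void main(String[] args) throws InterruptedException {
    // Stands in for the single exclusive lock entry guarding one block offset.
    final ReentrantLock offsetLock = new ReentrantLock();
    ExecutorService pool = Executors.newFixedThreadPool(16);
    long start = System.nanoTime();
    for (int i = 0; i < 16; i++) {
      pool.execute(() -> {
        offsetLock.lock();
        try {
          Thread.sleep(10); // simulated ioEngine.read plus array copy
        } catch (InterruptedException e) {
          Thread.currentThread().interrupt();
        } finally {
          offsetLock.unlock();
        }
      });
    }
    pool.shutdown();
    pool.awaitTermination(1, TimeUnit.MINUTES);
    // Prints roughly 160 ms: 16 readers of the same offset ran serially.
    System.out.println((System.nanoTime() - start) / 1_000_000 + " ms");
  }
}
{code}
With a read-write lock instead, all 16 readers could hold the read lock at the 
same time and the total latency would stay close to that of a single read.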

To resolve the problem, we propose to use ReentrantReadWriteLock in 
BucketCache, and introduce a new class called IdReadWriteLock to implement it.
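
A rough sketch of the idea follows (simplified for illustration; the getLock 
method and the ConcurrentHashMap-based pool are assumptions here, not 
necessarily how the attached patch implements it): keep one 
ReentrantReadWriteLock per block offset, so concurrent readers of the same 
bucket entry can proceed in parallel while eviction/free can still take the 
write lock for exclusive access.
{code}
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.locks.ReentrantReadWriteLock;

public class IdReadWriteLock {
  // One read-write lock per id (block offset); the pool needs no global lock.
  private final ConcurrentHashMap<Long, ReentrantReadWriteLock> lockPool =
      new ConcurrentHashMap<>();

  /** Returns the read-write lock associated with the given id. */
  public ReentrantReadWriteLock getLock(long id) {
    return lockPool.computeIfAbsent(id, k -> new ReentrantReadWriteLock());
  }
}
{code}
The read path in BucketCache#getBlock would then only take the read lock 
around ioEngine.read, roughly like:
{code}
ReentrantReadWriteLock lock = offsetLock.getLock(bucketEntry.offset());
lock.readLock().lock();
try {
  if (bucketEntry.equals(backingMap.get(key))) {
    int len = bucketEntry.getLength();
    Cacheable cachedBlock = ioEngine.read(bucketEntry.offset(), len,
        bucketEntry.deserializerReference(this.deserialiserMap));
    // ...
  }
} finally {
  lock.readLock().unlock();
}
{code}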

  was:
We store feature data of online items in HBase, do machine learning on these 
features, and supply the outputs to our online search engine. In such a 
scenario we launch hundreds of YARN workers, and each worker reads all the 
features of one item (i.e. a single rowkey in HBase), so there is heavy 
parallel reading on a single rowkey.

We were using BlockCache but recently started trying BucketCache to resolve GC 
issues, and, just as the title says, we have observed a severe performance 
downgrade. After some analysis we found the root cause is the lock in 
BucketCache#getBlock, as shown below:
{code}
      try {
        lockEntry = offsetLock.getLockEntry(bucketEntry.offset());
        // ...
        if (bucketEntry.equals(backingMap.get(key))) {
          // ...
          int len = bucketEntry.getLength();
          Cacheable cachedBlock = ioEngine.read(bucketEntry.offset(), len,
              bucketEntry.deserializerReference(this.deserialiserMap));
{code}

Since ioEngine.read involves an array copy, it is much more costly than the 
corresponding operation in LruCache. And since IdLock#getLockEntry uses 
synchronized, parallel reads landing on the same bucket entry are executed 
serially, which causes really bad performance.

To resolve the problem, we propose to use ReentrantReadWriteLock in 
BucketCache, and introduce a new class called IdReadWriteLock to implement it.


> Severe performance downgrade when parallel reading a single key from 
> BucketCache
> --------------------------------------------------------------------------------
>
>                 Key: HBASE-14463
>                 URL: https://issues.apache.org/jira/browse/HBASE-14463
>             Project: HBase
>          Issue Type: Bug
>    Affects Versions: 0.98.14, 1.1.2
>            Reporter: Yu Li
>            Assignee: Yu Li
>             Fix For: 2.0.0, 1.3.0
>
>         Attachments: HBASE-14463.patch, TestBucketCache_with_IdLock.png, 
> TestBucketCache_with_IdReadWriteLock.png
>
>
> We store feature data of online items in HBase, do machine learning on these 
> features, and supply the outputs to our online search engine. In such a 
> scenario we launch hundreds of YARN workers, and each worker reads all the 
> features of one item (i.e. a single rowkey in HBase), so there is heavy 
> parallel reading on a single rowkey.
> We were using LruCache but recently started trying BucketCache to resolve GC 
> issues, and, just as the title says, we have observed a severe performance 
> downgrade. After some analysis we found the root cause is the lock in 
> BucketCache#getBlock, as shown below:
> {code}
>       try {
>         lockEntry = offsetLock.getLockEntry(bucketEntry.offset());
>         // ...
>         if (bucketEntry.equals(backingMap.get(key))) {
>           // ...
>           int len = bucketEntry.getLength();
>           Cacheable cachedBlock = ioEngine.read(bucketEntry.offset(), len,
>               bucketEntry.deserializerReference(this.deserialiserMap));
> {code}
> Since ioEngine.read involves an array copy, it is much more costly than the 
> corresponding operation in LruCache. And since IdLock#getLockEntry uses 
> synchronized, parallel reads landing on the same bucket entry are executed 
> serially, which causes really bad performance.
> To resolve the problem, we propose to use ReentrantReadWriteLock in 
> BucketCache, and introduce a new class called IdReadWriteLock to implement it.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
